modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 06:27:46) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 499 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 06:26:25) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
shibajustfor/e7b18d46-7226-4a14-b9e0-118f744b5956 | shibajustfor | 2025-01-29T21:32:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T21:29:15Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e7b18d46-7226-4a14-b9e0-118f744b5956
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22a7382f476a3340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22a7382f476a3340_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/e7b18d46-7226-4a14-b9e0-118f744b5956
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/22a7382f476a3340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a1cb988f-7957-4a81-8cc4-c6f235d31d98
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a1cb988f-7957-4a81-8cc4-c6f235d31d98
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e7b18d46-7226-4a14-b9e0-118f744b5956
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.4133
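The card itself does not show how to load the adapter, so the following is a minimal usage sketch, assuming the standard `transformers` + `peft` loading path for a LoRA adapter; only the repository IDs come from the config above, and the prompt is illustrative.
```python
# Minimal sketch (assumed usage, not part of the original card): load the LoRA
# adapter on top of its Falcon base model and run a single generation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "tiiuae/falcon-rw-1b"
adapter_id = "shibajustfor/e7b18d46-7226-4a14-b9e0-118f744b5956"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)

# The config above trains on a plain '{instruction}' prompt (question -> solution).
inputs = tokenizer("What is the sum of the first 100 positive integers?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```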
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.6886 |
| 6.2532 | 0.0031 | 13 | 1.5225 |
| 6.0738 | 0.0063 | 26 | 1.4543 |
| 5.5457 | 0.0094 | 39 | 1.4133 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/dc8ff502-6a08-4a0d-95f9-1aee3402ff61 | kk-aivio | 2025-01-29T21:31:22Z | 9 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T21:28:27Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc8ff502-6a08-4a0d-95f9-1aee3402ff61
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22a7382f476a3340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22a7382f476a3340_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/dc8ff502-6a08-4a0d-95f9-1aee3402ff61
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/22a7382f476a3340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a1cb988f-7957-4a81-8cc4-c6f235d31d98
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a1cb988f-7957-4a81-8cc4-c6f235d31d98
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dc8ff502-6a08-4a0d-95f9-1aee3402ff61
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.4497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.7015 |
| 6.4133 | 0.0031 | 13 | 1.5509 |
| 6.2202 | 0.0063 | 26 | 1.4776 |
| 5.6429 | 0.0094 | 39 | 1.4497 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/90b604c4-6f30-4091-a762-c98f277d65bd | mrferr3t | 2025-01-29T21:30:26Z | 6 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T21:26:38Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 90b604c4-6f30-4091-a762-c98f277d65bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22a7382f476a3340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22a7382f476a3340_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/90b604c4-6f30-4091-a762-c98f277d65bd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 23
micro_batch_size: 2
mlflow_experiment_name: /tmp/22a7382f476a3340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 0
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a1cb988f-7957-4a81-8cc4-c6f235d31d98
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a1cb988f-7957-4a81-8cc4-c6f235d31d98
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 90b604c4-6f30-4091-a762-c98f277d65bd
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.4872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 23
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.8564 | 0.0002 | 1 | 1.7015 |
| 6.7593 | 0.0015 | 6 | 1.6080 |
| 5.3286 | 0.0029 | 12 | 1.5384 |
| 6.3491 | 0.0044 | 18 | 1.4872 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ardaspear/b3c0b14f-d033-4843-b041-e2029afb35e7 | ardaspear | 2025-01-29T21:29:50Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T19:58:00Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b3c0b14f-d033-4843-b041-e2029afb35e7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 50f3de17dcca2192_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/50f3de17dcca2192_train_data.json
type:
field_input: ''
field_instruction: rendered_input
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/b3c0b14f-d033-4843-b041-e2029afb35e7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/50f3de17dcca2192_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 729adb6c-9b7d-454a-b2b2-040e7bf39050
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: 729adb6c-9b7d-454a-b2b2-040e7bf39050
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b3c0b14f-d033-4843-b041-e2029afb35e7
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.3](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.8530
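As with the other adapters in this dump, the card stops at the metrics. The sketch below is an assumption, not taken from the card: it shows one common way to use such a LoRA adapter, namely loading it onto the Mistral base model and merging the weights so the result can be served without `peft`.
```python
# Sketch (assumed usage): attach the LoRA adapter to the Mistral base model and
# merge it into the weights for standalone saving/serving.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-instruct-v0.3"
adapter_id = "ardaspear/b3c0b14f-d033-4843-b041-e2029afb35e7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Fold the low-rank updates into the base weights; the returned model no longer
# depends on peft at inference time.
merged = model.merge_and_unload()
merged.save_pretrained("mistral-7b-summarizer-merged")   # hypothetical output path
tokenizer.save_pretrained("mistral-7b-summarizer-merged")
```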
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 2.0751 |
| 4.8841 | 0.0027 | 9 | 1.0569 |
| 3.3807 | 0.0053 | 18 | 0.9211 |
| 3.4477 | 0.0080 | 27 | 0.8963 |
| 3.675 | 0.0106 | 36 | 0.8811 |
| 3.1355 | 0.0133 | 45 | 0.8736 |
| 3.3764 | 0.0159 | 54 | 0.8668 |
| 3.4102 | 0.0186 | 63 | 0.8610 |
| 3.2835 | 0.0212 | 72 | 0.8611 |
| 3.3175 | 0.0239 | 81 | 0.8552 |
| 3.1059 | 0.0265 | 90 | 0.8534 |
| 3.2693 | 0.0292 | 99 | 0.8530 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiulawaldev/3e3a5ee9-1159-4956-a1a6-2e567838ea95 | robiulawaldev | 2025-01-29T21:28:57Z | 8 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T21:25:40Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3e3a5ee9-1159-4956-a1a6-2e567838ea95
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22a7382f476a3340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22a7382f476a3340_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/3e3a5ee9-1159-4956-a1a6-2e567838ea95
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: constant
max_steps: 55
micro_batch_size: 4
mlflow_experiment_name: /tmp/22a7382f476a3340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a1cb988f-7957-4a81-8cc4-c6f235d31d98
wandb_project: Birthday-SN56-37-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a1cb988f-7957-4a81-8cc4-c6f235d31d98
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3e3a5ee9-1159-4956-a1a6-2e567838ea95
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.3237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 55
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.6014 |
| 2.8421 | 0.0034 | 14 | 1.4223 |
| 2.817 | 0.0068 | 28 | 1.3584 |
| 2.7062 | 0.0102 | 42 | 1.3237 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso18/23b8e394-43c2-4204-9dcd-45fad5cd7ebc | lesso18 | 2025-01-29T21:28:49Z | 7 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T21:24:35Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 23b8e394-43c2-4204-9dcd-45fad5cd7ebc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22a7382f476a3340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22a7382f476a3340_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso18/23b8e394-43c2-4204-9dcd-45fad5cd7ebc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/22a7382f476a3340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a1cb988f-7957-4a81-8cc4-c6f235d31d98
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: a1cb988f-7957-4a81-8cc4-c6f235d31d98
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 23b8e394-43c2-4204-9dcd-45fad5cd7ebc
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.3973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.831 | 0.0484 | 200 | 1.3973 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/11dcd089-d487-4253-82e9-7ad783dfa708 | daniel40 | 2025-01-29T21:28:32Z | 10 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T21:25:30Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 11dcd089-d487-4253-82e9-7ad783dfa708
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22a7382f476a3340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22a7382f476a3340_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/11dcd089-d487-4253-82e9-7ad783dfa708
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/22a7382f476a3340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a1cb988f-7957-4a81-8cc4-c6f235d31d98
wandb_project: Birthday-SN56-31-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a1cb988f-7957-4a81-8cc4-c6f235d31d98
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 11dcd089-d487-4253-82e9-7ad783dfa708
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.4466
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.8564 | 0.0002 | 1 | 1.7015 |
| 6.6635 | 0.0031 | 13 | 1.5675 |
| 6.2159 | 0.0063 | 26 | 1.4803 |
| 6.9377 | 0.0094 | 39 | 1.4466 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
havinash-ai/bd58355c-4c4e-4745-a761-1e0f20125940 | havinash-ai | 2025-01-29T21:28:22Z | 7 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T21:25:30Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bd58355c-4c4e-4745-a761-1e0f20125940
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22a7382f476a3340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22a7382f476a3340_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/bd58355c-4c4e-4745-a761-1e0f20125940
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/22a7382f476a3340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a1cb988f-7957-4a81-8cc4-c6f235d31d98
wandb_project: Birthday-SN56-9-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a1cb988f-7957-4a81-8cc4-c6f235d31d98
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bd58355c-4c4e-4745-a761-1e0f20125940
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.4494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.7015 |
| 6.4132 | 0.0031 | 13 | 1.5510 |
| 6.2296 | 0.0063 | 26 | 1.4777 |
| 5.6494 | 0.0094 | 39 | 1.4494 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
auxyus/7636ec70-bbaa-405f-a478-d785907d5bad | auxyus | 2025-01-29T21:27:54Z | 6 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-29T20:58:45Z | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7636ec70-bbaa-405f-a478-d785907d5bad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2b415d885d8564e3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2b415d885d8564e3_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: auxyus/7636ec70-bbaa-405f-a478-d785907d5bad
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/2b415d885d8564e3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 2d6f1768-4460-485b-b2ff-a088d6f2fd46
wandb_project: Gradients-On-Two
wandb_run: your_name
wandb_runid: 2d6f1768-4460-485b-b2ff-a088d6f2fd46
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7636ec70-bbaa-405f-a478-d785907d5bad
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 3.2345
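For readers who want to reproduce the adapter outside Axolotl, the snippet below is an illustrative sketch (an assumption, not from the card) of how the LoRA hyperparameters in the config above map onto a `peft.LoraConfig`; `lora_target_linear: true` is approximated with `target_modules="all-linear"`.
```python
# Illustrative sketch: the LoRA settings from the Axolotl config, expressed as a
# peft.LoraConfig applied to the same BLOOM base model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

lora_config = LoraConfig(
    r=16,                         # lora_r
    lora_alpha=32,                # lora_alpha
    lora_dropout=0.05,            # lora_dropout
    target_modules="all-linear",  # approximation of lora_target_linear: true
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```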
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 3.4036 |
| 13.766 | 0.0068 | 9 | 3.3668 |
| 13.0846 | 0.0136 | 18 | 3.3128 |
| 12.9224 | 0.0203 | 27 | 3.2826 |
| 13.0051 | 0.0271 | 36 | 3.2667 |
| 12.8394 | 0.0339 | 45 | 3.2547 |
| 13.1302 | 0.0407 | 54 | 3.2464 |
| 13.0954 | 0.0475 | 63 | 3.2405 |
| 13.0834 | 0.0543 | 72 | 3.2376 |
| 12.9475 | 0.0610 | 81 | 3.2362 |
| 13.0786 | 0.0678 | 90 | 3.2351 |
| 12.9923 | 0.0746 | 99 | 3.2345 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
chatpig/llava-llama3 | chatpig | 2025-01-29T21:27:48Z | 666 | 1 | null | [
"gguf",
"gguf-node",
"image-to-text",
"dataset:Lin-Chen/ShareGPT4V",
"base_model:xtuner/llava-llama-3-8b-v1_1-gguf",
"base_model:quantized:xtuner/llava-llama-3-8b-v1_1-gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | image-to-text | 2025-01-29T09:44:14Z | ---
datasets:
- Lin-Chen/ShareGPT4V
base_model:
- xtuner/llava-llama-3-8b-v1_1-gguf
pipeline_tag: image-to-text
tags:
- gguf-node
---
# llava-llama3
- base model from [xtuner](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf)
- use it as a text encoder (drag it into the folder ./models/text_encoders) |
great0001/2219e2c3-7151-44a2-a556-890f5fe98bc8 | great0001 | 2025-01-29T21:27:32Z | 7 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T21:24:32Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2219e2c3-7151-44a2-a556-890f5fe98bc8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22a7382f476a3340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22a7382f476a3340_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/2219e2c3-7151-44a2-a556-890f5fe98bc8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/22a7382f476a3340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a1cb988f-7957-4a81-8cc4-c6f235d31d98
wandb_project: Birthday-SN56-14-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a1cb988f-7957-4a81-8cc4-c6f235d31d98
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2219e2c3-7151-44a2-a556-890f5fe98bc8
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.4462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.8564 | 0.0002 | 1 | 1.7015 |
| 6.6644 | 0.0031 | 13 | 1.5672 |
| 6.2165 | 0.0063 | 26 | 1.4799 |
| 6.9512 | 0.0094 | 39 | 1.4462 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
goktug14/bert_imdb | goktug14 | 2025-01-29T21:27:10Z | 37 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:stanfordnlp/imdb",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-18T18:29:51Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
model-index:
- name: bert_imdb
results: []
datasets:
- stanfordnlp/imdb
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_imdb
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the IMDB dataset (stanfordnlp/imdb).
It achieves the following results on the evaluation set:
- Loss: 0.3119
- Accuracy: 0.9403
- Recall: 0.9430
- Precision: 0.9379
To access my fine-tuning tutorial, you can check the following [repository](https://github.com/GoktugGuvercin/Text-Classification).
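For a quick sanity check, the checkpoint can be run through the standard `transformers` pipeline; the snippet below is a usage sketch (the label names returned depend on the checkpoint's config).
```python
# Usage sketch: sentiment inference with the fine-tuned checkpoint via the
# text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="goktug14/bert_imdb")
print(classifier("A beautifully shot film, but the script never quite lands."))
# e.g. [{'label': '...', 'score': 0.9x}] -- label names come from the model config
```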
## Comparison with SOTA:
- DistilBERT 66M: 92.82
- BERT-base + ITPT: 95.63
- BERT-large: 95.49
Reference: [Paperswithcode](https://paperswithcode.com/sota/sentiment-analysis-on-imdb)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| 0.2099 | 1.0 | 1563 | 0.2456 | 0.9102 | 0.8481 | 0.9683 |
| 0.1379 | 2.0 | 3126 | 0.2443 | 0.9274 | 0.8911 | 0.9608 |
| 0.0752 | 3.0 | 4689 | 0.2845 | 0.9391 | 0.9509 | 0.9290 |
| 0.0352 | 4.0 | 6252 | 0.3119 | 0.9403 | 0.9430 | 0.9379 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0 |
Q-bert/nda-tiny | Q-bert | 2025-01-29T21:25:26Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"nda",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T21:21:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
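Since the card leaves this section empty, the following is only a placeholder sketch, assuming the checkpoint loads through the generic `transformers` auto classes; the custom `nda` architecture may require `trust_remote_code=True`, and none of this is confirmed by the card.
```python
# Placeholder sketch (an assumption; the card does not document usage).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Q-bert/nda-tiny"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```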
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
oldiday/2b6af467-e7ce-4367-bb25-94ec73886d8a | oldiday | 2025-01-29T21:25:08Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"region:us"
] | null | 2025-01-29T19:32:22Z | ---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b6af467-e7ce-4367-bb25-94ec73886d8a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 29cacfcdf8be338c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/29cacfcdf8be338c_train_data.json
type:
field_input: dataset
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: oldiday/2b6af467-e7ce-4367-bb25-94ec73886d8a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/29cacfcdf8be338c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: e3873475-6a24-45c6-8e86-c40b5fdad2bb
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: e3873475-6a24-45c6-8e86-c40b5fdad2bb
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2b6af467-e7ce-4367-bb25-94ec73886d8a
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on an unnamed dataset (see the data files listed in the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.5185
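Unlike most other adapters in this dump, this config combines two fields per example (`{instruction} {input}`). The sketch below is illustrative only, not from the card: it shows how a prompt would be assembled to match the training format before being fed to the adapter.
```python
# Sketch: assemble a prompt the way the Axolotl format strings above describe
# (format: '{instruction} {input}', no_input_format: '{instruction}').
def build_prompt(instruction: str, input_text: str = "", system: str = "") -> str:
    body = f"{instruction} {input_text}".strip() if input_text else instruction
    return f"{system}{body}" if system else body

# field_instruction maps to the dataset's 'prompt' column, field_input to 'dataset'.
print(build_prompt("Answer the question concisely.", "Which corpus was this drawn from?"))
print(build_prompt("Answer the question concisely."))  # falls back to no_input_format
```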
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.6758 |
| 1.8074 | 0.0045 | 9 | 1.7985 |
| 0.8538 | 0.0090 | 18 | 0.9078 |
| 0.8675 | 0.0136 | 27 | 0.7126 |
| 0.7247 | 0.0181 | 36 | 0.6262 |
| 0.6046 | 0.0226 | 45 | 0.5804 |
| 0.3735 | 0.0271 | 54 | 0.5503 |
| 0.5546 | 0.0317 | 63 | 0.5437 |
| 0.4516 | 0.0362 | 72 | 0.5292 |
| 0.4243 | 0.0407 | 81 | 0.5223 |
| 0.6452 | 0.0452 | 90 | 0.5191 |
| 0.6426 | 0.0497 | 99 | 0.5185 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adammandic87/f357a253-5426-467e-a49a-93f3591cca05 | adammandic87 | 2025-01-29T21:24:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-13b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-64k",
"region:us"
] | null | 2025-01-29T20:46:07Z | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-13b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f357a253-5426-467e-a49a-93f3591cca05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-13b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 68d6de2b31b188fb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/68d6de2b31b188fb_train_data.json
type:
field_instruction: text
field_output: context
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/f357a253-5426-467e-a49a-93f3591cca05
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/68d6de2b31b188fb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 372666b9-05ad-4f22-9650-1fc0469ec3a8
wandb_project: birthday-sn56-19-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 372666b9-05ad-4f22-9650-1fc0469ec3a8
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f357a253-5426-467e-a49a-93f3591cca05
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-13b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
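The batch-size figures above follow the usual relationship: the total train batch size is the per-device batch size multiplied by the gradient-accumulation steps (and by the number of devices, assumed to be one here). A quick sanity check:
```python
train_batch_size = 2          # per-device micro batch size listed above
gradient_accumulation = 4     # gradient_accumulation_steps listed above
num_devices = 1               # assumption: single-GPU run

# Effective batch size seen by the optimizer per update step
total_train_batch_size = train_batch_size * gradient_accumulation * num_devices
assert total_train_batch_size == 8
```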
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 12.163 | 0.0005 | 13 | nan |
| 6.0278 | 0.0010 | 26 | nan |
| 4.1301 | 0.0016 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thaffggg/78e2f388-2ed6-4eee-b31b-c881681e7b49 | thaffggg | 2025-01-29T21:21:54Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T19:28:06Z | ---
library_name: peft
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 78e2f388-2ed6-4eee-b31b-c881681e7b49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/zephyr-7b-beta
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d9d67874deecade6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d9d67874deecade6_train_data.json
type:
field_instruction: generated
field_output: subtitle
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/78e2f388-2ed6-4eee-b31b-c881681e7b49
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d9d67874deecade6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8d5cb705-4de0-4f05-a1c9-d1ffcf186aee
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8d5cb705-4de0-4f05-a1c9-d1ffcf186aee
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 78e2f388-2ed6-4eee-b31b-c881681e7b49
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7760
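Because this artifact is a LoRA adapter (PEFT) rather than full model weights, it is normally loaded on top of the base model. A minimal sketch, assuming the adapter files sit at the repository root under the default PEFT names:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "thaffggg/78e2f388-2ed6-4eee-b31b-c881681e7b49"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the LoRA adapter trained with axolotl
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("Write a one-sentence subtitle for a cooking show.", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```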
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8372 | 0.0038 | 200 | 0.7760 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hyoo14/DNABERT2_PD | hyoo14 | 2025-01-29T21:19:39Z | 26 | 0 | peft | [
"peft",
"bert",
"custom_code",
"arxiv:1910.09700",
"base_model:zhihan1996/DNABERT-2-117M",
"base_model:adapter:zhihan1996/DNABERT-2-117M",
"region:us"
] | null | 2024-10-08T18:56:46Z | ---
base_model: zhihan1996/DNABERT-2-117M
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
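No loading recipe is documented yet. Since the repository metadata marks this as a PEFT adapter on top of `zhihan1996/DNABERT-2-117M`, a minimal sketch along the following lines may work; the training task and any task-specific head are not documented, so treat this as an assumption rather than official usage:
```python
import torch
from transformers import AutoTokenizer, AutoModel
from peft import PeftModel

# DNABERT-2 ships a custom architecture, hence trust_remote_code=True
base_id = "zhihan1996/DNABERT-2-117M"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModel.from_pretrained(base_id, trust_remote_code=True)

# Attach the PEFT adapter from this repository (assumed to sit at the repo root)
model = PeftModel.from_pretrained(base, "hyoo14/DNABERT2_PD")
model.eval()

dna = "ACGTAGCATCGGATCTATCTATCGACACTTGGTTATCGATCTACGAGCATCTCGTTAGC"
inputs = tokenizer(dna, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs)[0]  # last-layer token embeddings
print(hidden_states.shape)
```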
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.1 |
telemauritius7/Navin | telemauritius7 | 2025-01-29T21:18:49Z | 10 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T20:52:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Navin
---
# Navin
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Navin` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('telemauritius7/Navin', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
nhung03/a03f868b-e3a3-44a3-bea7-3fded1fd8387 | nhung03 | 2025-01-29T21:18:20Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T20:55:44Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a03f868b-e3a3-44a3-bea7-3fded1fd8387
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 39ca5e7349c3502d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/39ca5e7349c3502d_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/a03f868b-e3a3-44a3-bea7-3fded1fd8387
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/39ca5e7349c3502d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b12378a2-158f-4d8d-a62b-21b892db2bb6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b12378a2-158f-4d8d-a62b-21b892db2bb6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a03f868b-e3a3-44a3-bea7-3fded1fd8387
This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co/unsloth/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.9295 | 0.1863 | 200 | 1.4083 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso05/1db0546a-93cb-4e9e-939c-02a12191d43b | lesso05 | 2025-01-29T21:17:16Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T20:49:38Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1db0546a-93cb-4e9e-939c-02a12191d43b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- feb300ac0ab17add_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/feb300ac0ab17add_train_data.json
type:
field_instruction: premises
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/1db0546a-93cb-4e9e-939c-02a12191d43b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/feb300ac0ab17add_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0029dd14-2f28-48f1-ae11-d42662d2b22d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0029dd14-2f28-48f1-ae11-d42662d2b22d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1db0546a-93cb-4e9e-939c-02a12191d43b
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.6322 | 0.0003 | 1 | 5.6657 |
| 5.3341 | 0.0013 | 5 | 5.4994 |
| 3.8268 | 0.0026 | 10 | 3.2765 |
| 0.9462 | 0.0040 | 15 | 0.7347 |
| 0.3726 | 0.0053 | 20 | 0.5839 |
| 0.4131 | 0.0066 | 25 | 0.5714 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nisk36/SFT_ojousama_lr5_diffseed | Nisk36 | 2025-01-29T21:16:08Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T21:12:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
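No snippet is provided yet. The repository contains full Llama-architecture weights for text generation, so a generic `transformers` pipeline call like the one below is likely to work; the expected prompt/chat format is not documented, so the plain-text prompt here is only an assumption:
```python
from transformers import pipeline

# Generic causal-LM generation; model-specific prompt formatting is undocumented
generator = pipeline(
    "text-generation",
    model="Nisk36/SFT_ojousama_lr5_diffseed",
    device_map="auto",
)

print(generator("こんにちは。", max_new_tokens=64)[0]["generated_text"])  # "Hello." as a simple Japanese prompt
```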
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elloza/mms-tts-mlg-onnx | elloza | 2025-01-29T21:13:38Z | 27 | 0 | transformers.js | [
"transformers.js",
"onnx",
"vits",
"text-to-audio",
"text-to-speech",
"base_model:facebook/mms-tts-mlg",
"base_model:quantized:facebook/mms-tts-mlg",
"region:us"
] | text-to-speech | 2025-01-29T19:54:16Z | ---
base_model: facebook/mms-tts-mlg
library_name: transformers.js
pipeline_tag: text-to-speech
tags:
- text-to-audio
---
https://huggingface.co/facebook/mms-tts-mlg with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Generate Malagasy speech with `elloza/mms-tts-mlg-onnx`.
```js
import { pipeline } from '@xenova/transformers';
// Create a text-to-speech pipeline
const synthesizer = await pipeline('text-to-speech', 'elloza/mms-tts-mlg-onnx', {
quantized: false, // Remove this line to use the quantized version (default)
});
// Generate speech
const output = await synthesizer('manao ahoana');
console.log(output);
// {
// audio: Float32Array(10752) [ ... ],
// sampling_rate: 16000
// }
```
Optionally, save the audio to a wav file (Node.js):
```js
import wavefile from 'wavefile';
import fs from 'fs';
const wav = new wavefile.WaveFile();
wav.fromScratch(1, output.sampling_rate, '32f', output.audio);
fs.writeFileSync('out.wav', wav.toBuffer());
```
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
biustnaspust/puszek37 | biustnaspust | 2025-01-29T21:12:50Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T21:08:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
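In the absence of a documented example, a plain `transformers` loading sketch should apply, since the repository ships full Llama-architecture weights; the prompt format and intended use are undocumented, so treat this as an assumption:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "biustnaspust/puszek37"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```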
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrferr3t/4a8f3ba9-92bb-4aa4-b8db-364a0fb533f3 | mrferr3t | 2025-01-29T21:10:44Z | 12 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-29T21:00:29Z | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4a8f3ba9-92bb-4aa4-b8db-364a0fb533f3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2b415d885d8564e3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2b415d885d8564e3_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/4a8f3ba9-92bb-4aa4-b8db-364a0fb533f3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2b415d885d8564e3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2d6f1768-4460-485b-b2ff-a088d6f2fd46
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2d6f1768-4460-485b-b2ff-a088d6f2fd46
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4a8f3ba9-92bb-4aa4-b8db-364a0fb533f3
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.7556 | 0.0002 | 1 | 3.4074 |
| 13.5734 | 0.0006 | 3 | 3.4061 |
| 13.9745 | 0.0011 | 6 | 3.3954 |
| 14.7185 | 0.0017 | 9 | 3.3715 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JacksonBrune/29faf005-4ca3-4728-b74f-2129aea0933c | JacksonBrune | 2025-01-29T21:05:02Z | 6 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:fxmarty/really-tiny-falcon-testing",
"base_model:adapter:fxmarty/really-tiny-falcon-testing",
"license:mit",
"region:us"
] | null | 2025-01-29T21:04:00Z | ---
library_name: peft
license: mit
base_model: fxmarty/really-tiny-falcon-testing
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 29faf005-4ca3-4728-b74f-2129aea0933c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/really-tiny-falcon-testing
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c1300f5c75a79f6d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c1300f5c75a79f6d_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/29faf005-4ca3-4728-b74f-2129aea0933c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c1300f5c75a79f6d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2a7bca67-9f98-441b-be8c-f6a57f91d9bd
wandb_project: Birthday-SN56-12-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2a7bca67-9f98-441b-be8c-f6a57f91d9bd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 29faf005-4ca3-4728-b74f-2129aea0933c
This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.9183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 44.5365 | 0.0001 | 1 | 11.1310 |
| 44.3385 | 0.0011 | 13 | 11.0681 |
| 43.8177 | 0.0022 | 26 | 10.9620 |
| 43.7135 | 0.0033 | 39 | 10.9183 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
silviasapora/gemma-7b-silvia-shuffled-5e-5-05-v5 | silviasapora | 2025-01-29T21:02:52Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:silviasapora/argilla-mix-low",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T01:59:27Z | ---
base_model: google/gemma-7b
datasets:
- silviasapora/argilla-mix-low
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [silviasapora/argilla-mix-low](https://huggingface.co/datasets/silviasapora/argilla-mix-low) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-silvia-shuffled-5e-5-05-v5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/ur9d6pk3)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
anjali-mudgal/Medical-COT-DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit-10000-example | anjali-mudgal | 2025-01-29T21:02:47Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T20:32:18Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** anjali-mudgal
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
great0001/162722cf-e5f5-4e47-88e5-71b619ed0bba | great0001 | 2025-01-29T21:02:45Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-01-29T20:58:45Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 162722cf-e5f5-4e47-88e5-71b619ed0bba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- feb300ac0ab17add_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/feb300ac0ab17add_train_data.json
type:
field_instruction: premises
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/162722cf-e5f5-4e47-88e5-71b619ed0bba
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/feb300ac0ab17add_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0029dd14-2f28-48f1-ae11-d42662d2b22d
wandb_project: Birthday-SN56-14-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0029dd14-2f28-48f1-ae11-d42662d2b22d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 162722cf-e5f5-4e47-88e5-71b619ed0bba
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7135 | 0.0003 | 1 | 5.6963 |
| 2.3065 | 0.0034 | 13 | 1.5436 |
| 0.4583 | 0.0069 | 26 | 0.5127 |
| 0.3496 | 0.0103 | 39 | 0.4280 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nathanialhunt/812a876b-00a9-4828-9247-97abfc337cbf | nathanialhunt | 2025-01-29T21:02:07Z | 6 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-29T20:59:00Z | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 812a876b-00a9-4828-9247-97abfc337cbf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2b415d885d8564e3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2b415d885d8564e3_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/812a876b-00a9-4828-9247-97abfc337cbf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/2b415d885d8564e3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2d6f1768-4460-485b-b2ff-a088d6f2fd46
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2d6f1768-4460-485b-b2ff-a088d6f2fd46
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 812a876b-00a9-4828-9247-97abfc337cbf
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 3.4074 |
| 13.7599 | 0.0024 | 13 | 3.3288 |
| 13.0855 | 0.0049 | 26 | 3.2850 |
| 13.4018 | 0.0073 | 39 | 3.2716 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
guyschepers/guy1 | guyschepers | 2025-01-29T21:01:37Z | 25 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T20:49:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: guy
---
# Guy1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `guy` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('guyschepers/guy1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/orca_mini_v3_70b-i1-GGUF | mradermacher | 2025-01-29T21:00:10Z | 608 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:psmathur/orca_mini_v1_dataset",
"dataset:ehartford/dolphin",
"base_model:pankajmathur/orca_mini_v3_70b",
"base_model:quantized:pankajmathur/orca_mini_v3_70b",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-01-29T09:54:29Z | ---
base_model: pankajmathur/orca_mini_v3_70b
datasets:
- psmathur/orca_mini_v1_dataset
- ehartford/dolphin
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pankajmathur/orca_mini_v3_70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
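For the multi-part Q6_K entry in the table below, the downloaded parts only need to be concatenated byte-for-byte into a single `.gguf` before loading (equivalent to `cat part1of2 part2of2 > file.gguf` on the command line). A small sketch using the file names from the table:
```python
import shutil

parts = [
    "orca_mini_v3_70b.i1-Q6_K.gguf.part1of2",
    "orca_mini_v3_70b.i1-Q6_K.gguf.part2of2",
]

# Join the parts into one GGUF file that llama.cpp-based tools can load
with open("orca_mini_v3_70b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```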
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 23.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q4_1.gguf) | i1-Q4_1 | 43.3 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF/resolve/main/orca_mini_v3_70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF | mradermacher | 2025-01-29T20:59:53Z | 220 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:HuggingFaceTB/SmolLM2-MagpieUltraPlus-MathInstruct",
"base_model:quantized:HuggingFaceTB/SmolLM2-MagpieUltraPlus-MathInstruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T20:25:27Z | ---
base_model: HuggingFaceTB/SmolLM2-MagpieUltraPlus-MathInstruct
language:
- en
library_name: transformers
model_name: SmolLM2-MagpieUltraPlus-MathInstruct
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/HuggingFaceTB/SmolLM2-MagpieUltraPlus-MathInstruct
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
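For a small model like this one, a minimal sketch with `llama-cpp-python` (assuming a recent version that provides `Llama.from_pretrained`, and using a filename from the table below) could look like:
```py
from llama_cpp import Llama

# Fetch and load the Q4_K_M quant directly from this repo
llm = Llama.from_pretrained(
    repo_id="mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF",
    filename="SmolLM2-MagpieUltraPlus-MathInstruct.Q4_K_M.gguf",
)

# Chat completion uses the chat template stored in the GGUF metadata, if present
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 12 * 17?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```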
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q5_K_M.gguf) | Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltraPlus-MathInstruct-GGUF/resolve/main/SmolLM2-MagpieUltraPlus-MathInstruct.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nathanialhunt/0e985a1d-7c10-4154-9f47-fc4eee4d351e | nathanialhunt | 2025-01-29T20:59:49Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-01-29T20:56:17Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0e985a1d-7c10-4154-9f47-fc4eee4d351e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- feb300ac0ab17add_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/feb300ac0ab17add_train_data.json
type:
field_instruction: premises
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/0e985a1d-7c10-4154-9f47-fc4eee4d351e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/feb300ac0ab17add_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0029dd14-2f28-48f1-ae11-d42662d2b22d
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0029dd14-2f28-48f1-ae11-d42662d2b22d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0e985a1d-7c10-4154-9f47-fc4eee4d351e
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.4272
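Since this repository contains only the LoRA adapter, a minimal loading sketch with 🤗 PEFT might look like the following (the prompt is an illustrative assumption; the adapter was trained to generate a hypothesis from premises):
```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
model = PeftModel.from_pretrained(base, "nathanialhunt/0e985a1d-7c10-4154-9f47-fc4eee4d351e")

prompt = "All dogs are mammals. All mammals are animals."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```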
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 5.6963 |
| 4.7372 | 0.0034 | 13 | 0.7201 |
| 0.9063 | 0.0069 | 26 | 0.4865 |
| 0.4059 | 0.0103 | 39 | 0.4272 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso08/856b6c9e-5f6a-48fb-9b88-b8475f89fdfe | lesso08 | 2025-01-29T20:58:06Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-01-29T20:50:05Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 856b6c9e-5f6a-48fb-9b88-b8475f89fdfe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- feb300ac0ab17add_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/feb300ac0ab17add_train_data.json
type:
field_instruction: premises
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso08/856b6c9e-5f6a-48fb-9b88-b8475f89fdfe
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/feb300ac0ab17add_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0029dd14-2f28-48f1-ae11-d42662d2b22d
wandb_project: multi
wandb_run: your_name
wandb_runid: 0029dd14-2f28-48f1-ae11-d42662d2b22d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 856b6c9e-5f6a-48fb-9b88-b8475f89fdfe
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.1778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1763 | 0.4235 | 200 | 0.1778 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/d31053de-ecd2-4436-9f66-a7e2f2209204 | daniel40 | 2025-01-29T20:54:32Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-01-29T20:50:10Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d31053de-ecd2-4436-9f66-a7e2f2209204
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- feb300ac0ab17add_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/feb300ac0ab17add_train_data.json
type:
field_instruction: premises
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/d31053de-ecd2-4436-9f66-a7e2f2209204
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/feb300ac0ab17add_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0029dd14-2f28-48f1-ae11-d42662d2b22d
wandb_project: Birthday-SN56-31-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0029dd14-2f28-48f1-ae11-d42662d2b22d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d31053de-ecd2-4436-9f66-a7e2f2209204
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.4317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7135 | 0.0003 | 1 | 5.6963 |
| 2.28 | 0.0034 | 13 | 1.4760 |
| 0.4659 | 0.0069 | 26 | 0.5132 |
| 0.3578 | 0.0103 | 39 | 0.4317 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/9e703772-50d2-4e2d-9e25-35fe2a771a9a | great0001 | 2025-01-29T20:54:01Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-01-29T20:49:59Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9e703772-50d2-4e2d-9e25-35fe2a771a9a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- feb300ac0ab17add_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/feb300ac0ab17add_train_data.json
type:
field_instruction: premises
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/9e703772-50d2-4e2d-9e25-35fe2a771a9a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/feb300ac0ab17add_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0029dd14-2f28-48f1-ae11-d42662d2b22d
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0029dd14-2f28-48f1-ae11-d42662d2b22d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9e703772-50d2-4e2d-9e25-35fe2a771a9a
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.4468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 5.6963 |
| 3.9228 | 0.0017 | 13 | 0.5532 |
| 0.6018 | 0.0034 | 26 | 0.4488 |
| 0.4565 | 0.0052 | 39 | 0.4468 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiual-awal/845e3982-5c21-4396-b106-b21576f01e86 | robiual-awal | 2025-01-29T20:53:12Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-01-29T20:49:24Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 845e3982-5c21-4396-b106-b21576f01e86
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- feb300ac0ab17add_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/feb300ac0ab17add_train_data.json
type:
field_instruction: premises
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/845e3982-5c21-4396-b106-b21576f01e86
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/feb300ac0ab17add_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0029dd14-2f28-48f1-ae11-d42662d2b22d
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0029dd14-2f28-48f1-ae11-d42662d2b22d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 845e3982-5c21-4396-b106-b21576f01e86
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.4234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 5.6963 |
| 4.7246 | 0.0034 | 13 | 0.7165 |
| 0.9182 | 0.0069 | 26 | 0.4837 |
| 0.4025 | 0.0103 | 39 | 0.4234 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nlpguy/Lion-Lamarck-v.1.1.0 | nlpguy | 2025-01-29T20:52:50Z | 22 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:CultriX/Qwen2.5-14B-Wernicke",
"base_model:merge:CultriX/Qwen2.5-14B-Wernicke",
"base_model:arcee-ai/SuperNova-Medius",
"base_model:merge:arcee-ai/SuperNova-Medius",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:sometimesanotion/Lamarck-14B-v0.7",
"base_model:merge:sometimesanotion/Lamarck-14B-v0.7",
"base_model:sometimesanotion/Qwen2.5-14B-Vimarckoso-v3",
"base_model:merge:sometimesanotion/Qwen2.5-14B-Vimarckoso-v3",
"base_model:sometimesanotion/Qwenvergence-14B-v10-DS",
"base_model:merge:sometimesanotion/Qwenvergence-14B-v10-DS",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T20:41:21Z | ---
base_model:
- CultriX/Qwen2.5-14B-Wernicke
- arcee-ai/SuperNova-Medius
- sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- sometimesanotion/Lamarck-14B-v0.7
- sometimesanotion/Qwenvergence-14B-v10-DS
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged with the [SCE](https://arxiv.org/abs/2408.07990) merge method, using [CultriX/Qwen2.5-14B-Wernicke](https://huggingface.co/CultriX/Qwen2.5-14B-Wernicke) as the base.
### Models Merged
The following models were included in the merge:
* [arcee-ai/SuperNova-Medius](https://huggingface.co/arcee-ai/SuperNova-Medius)
* [sometimesanotion/Qwen2.5-14B-Vimarckoso-v3](https://huggingface.co/sometimesanotion/Qwen2.5-14B-Vimarckoso-v3)
* [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)
* [sometimesanotion/Lamarck-14B-v0.7](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7)
* [sometimesanotion/Qwenvergence-14B-v10-DS](https://huggingface.co/sometimesanotion/Qwenvergence-14B-v10-DS)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: sometimesanotion/Lamarck-14B-v0.7
parameters:
select_topk: 1.0
- model: arcee-ai/SuperNova-Medius
parameters:
select_topk: 0.003125
- model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
parameters:
select_topk: 0.00625
- model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
parameters:
select_topk: 0.003125
- model: sometimesanotion/Qwenvergence-14B-v10-DS
parameters:
select_topk: 0.275
merge_method: sce
base_model: CultriX/Qwen2.5-14B-Wernicke
tokenizer:
source: sometimesanotion/Lamarck-14B-v0.7
dtype: float32
out_dtype: bfloat16
```
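To reproduce the merge, the configuration above can be saved as `config.yaml` and passed to mergekit's `mergekit-yaml` entry point; the published model itself loads like any other Qwen2-architecture checkpoint. A minimal loading sketch, assuming enough memory for a 14B model in bfloat16:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nlpguy/Lion-Lamarck-v.1.1.0"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me a one-sentence summary of model merging."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```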
|
Gwenn-LR/whisper-tiny | Gwenn-LR | 2025-01-29T20:51:15Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-01-12T19:22:28Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.31030228254164094
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5900
- Wer Ortho: 0.3103
- Wer: 0.3103
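A minimal inference sketch with the 🤗 Transformers pipeline (the audio path is a placeholder; any local audio file works):
```py
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition
asr = pipeline("automatic-speech-recognition", model="Gwenn-LR/whisper-tiny")

# Transcribe a local audio file (placeholder path)
result = asr("example_banking_query.wav")
print(result["text"])
```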
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 500
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 1.7824 | 1.7857 | 50 | 1.0732 | 0.4565 | 0.4565 |
| 0.3528 | 3.5714 | 100 | 0.4932 | 0.3745 | 0.3745 |
| 0.1313 | 5.3571 | 150 | 0.5215 | 0.3430 | 0.3430 |
| 0.035 | 7.1429 | 200 | 0.5468 | 0.3387 | 0.3387 |
| 0.0103 | 8.9286 | 250 | 0.5900 | 0.3103 | 0.3103 |
| 0.0085 | 10.7143 | 300 | 0.6345 | 0.3307 | 0.3307 |
| 0.009 | 12.5 | 350 | 0.6771 | 0.3418 | 0.3418 |
| 0.0137 | 14.2857 | 400 | 0.6456 | 0.3374 | 0.3374 |
| 0.0138 | 16.0714 | 450 | 0.6171 | 0.3294 | 0.3294 |
| 0.0151 | 17.8571 | 500 | 0.7379 | 0.4312 | 0.4312 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mrferr3t/ed018e6e-1914-43db-895e-5845e4253d63 | mrferr3t | 2025-01-29T20:48:37Z | 8 | 0 | peft | [
"peft",
"safetensors",
"codegen",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:katuni4ka/tiny-random-codegen2",
"base_model:adapter:katuni4ka/tiny-random-codegen2",
"region:us"
] | null | 2025-01-29T20:47:37Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-codegen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed018e6e-1914-43db-895e-5845e4253d63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-codegen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- be2d29ebd515b9c4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/be2d29ebd515b9c4_train_data.json
type:
field_instruction: query
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/ed018e6e-1914-43db-895e-5845e4253d63
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 11
micro_batch_size: 2
mlflow_experiment_name: /tmp/be2d29ebd515b9c4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b495c507-2dae-49e6-8b77-3c7142cce8dc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b495c507-2dae-49e6-8b77-3c7142cce8dc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ed018e6e-1914-43db-895e-5845e4253d63
This model is a fine-tuned version of [katuni4ka/tiny-random-codegen2](https://huggingface.co/katuni4ka/tiny-random-codegen2) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 10.8489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 43.4091 | 0.0002 | 1 | 10.8505 |
| 43.3601 | 0.0005 | 3 | 10.8504 |
| 43.3598 | 0.0010 | 6 | 10.8498 |
| 43.3989 | 0.0015 | 9 | 10.8489 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
graelo/DeepSeek-R1-Distill-Qwen-32B-AWQ | graelo | 2025-01-29T20:47:54Z | 16 | 0 | null | [
"safetensors",
"qwen2",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
] | null | 2025-01-29T20:43:39Z | ---
license: apache-2.0
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
---
Quantized from [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) to 4-bit AWQ (GEMM kernels).
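A minimal usage sketch (assuming `autoawq` is installed so that 🤗 Transformers can load the 4-bit weights; the checkpoint should also be servable with vLLM, which detects the AWQ quantization from the model config):
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "graelo/DeepSeek-R1-Distill-Qwen-32B-AWQ"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Think step by step: what is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```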
|
nttx/7ddcc99e-1a3e-4772-9477-b9f789771f6a | nttx | 2025-01-29T20:45:11Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T20:42:47Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ddcc99e-1a3e-4772-9477-b9f789771f6a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 20d8bc52992f4589_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/20d8bc52992f4589_train_data.json
type:
field_instruction: title
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/7ddcc99e-1a3e-4772-9477-b9f789771f6a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/20d8bc52992f4589_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f4949cdd-193c-4d9e-b503-e9e360323402
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f4949cdd-193c-4d9e-b503-e9e360323402
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7ddcc99e-1a3e-4772-9477-b9f789771f6a
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.5589
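Because only the LoRA adapter is stored here, one option is to merge it into the base model for standalone use — a minimal sketch with 🤗 PEFT (the output directory name is arbitrary):
```py
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and attach the adapter from this repo
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-360M-Instruct")
model = PeftModel.from_pretrained(base, "nttx/7ddcc99e-1a3e-4772-9477-b9f789771f6a")

# Bake the LoRA weights into the base model and save a standalone checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("smollm2-360m-title-to-summary")
AutoTokenizer.from_pretrained("unsloth/SmolLM2-360M-Instruct").save_pretrained("smollm2-360m-title-to-summary")
```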
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 136
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5959 | 0.9945 | 135 | 1.5558 |
| 1.8865 | 1.0018 | 136 | 1.5589 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibajustfor/b8d4094d-bf81-428f-bc7c-86b6ab650f43 | shibajustfor | 2025-01-29T20:43:30Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T20:40:49Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b8d4094d-bf81-428f-bc7c-86b6ab650f43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- efb4c503400c2a9e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/efb4c503400c2a9e_train_data.json
type:
field_input: narrative
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/b8d4094d-bf81-428f-bc7c-86b6ab650f43
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/efb4c503400c2a9e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f88a2c67-61c0-4967-afee-32770d47baf4
wandb_project: Birthday-SN56-39-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f88a2c67-61c0-4967-afee-32770d47baf4
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b8d4094d-bf81-428f-bc7c-86b6ab650f43
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 2.7779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 4.9963 |
| 4.4988 | 0.0012 | 13 | 3.9625 |
| 4.0577 | 0.0024 | 26 | 3.3235 |
| 3.6454 | 0.0036 | 39 | 2.7779 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/orca_mini_v3_70b-GGUF | mradermacher | 2025-01-29T20:41:29Z | 178 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:psmathur/orca_mini_v1_dataset",
"dataset:ehartford/dolphin",
"base_model:pankajmathur/orca_mini_v3_70b",
"base_model:quantized:pankajmathur/orca_mini_v3_70b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-01-20T09:40:45Z | ---
base_model: pankajmathur/orca_mini_v3_70b
datasets:
- psmathur/orca_mini_v1_dataset
- ehartford/dolphin
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/pankajmathur/orca_mini_v3_70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/orca_mini_v3_70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
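As a sketch of the multi-part case (filenames taken from the table below; the parts are raw byte splits, so simple concatenation restores the original .gguf):
```py
import shutil
from huggingface_hub import hf_hub_download

repo_id = "mradermacher/orca_mini_v3_70b-GGUF"
part_names = [
    "orca_mini_v3_70b.Q6_K.gguf.part1of2",
    "orca_mini_v3_70b.Q6_K.gguf.part2of2",
]

# Download both parts and concatenate them into a single GGUF file
with open("orca_mini_v3_70b.Q6_K.gguf", "wb") as out:
    for name in part_names:
        path = hf_hub_download(repo_id=repo_id, filename=name)
        with open(path, "rb") as part:
            shutil.copyfileobj(part, out)
# The resulting orca_mini_v3_70b.Q6_K.gguf can then be loaded by llama.cpp-based tools.
```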
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orca_mini_v3_70b-GGUF/resolve/main/orca_mini_v3_70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso13/f45a7d96-61c5-4c15-b58a-6a8449d0149c | lesso13 | 2025-01-29T20:41:25Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T19:33:31Z | ---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f45a7d96-61c5-4c15-b58a-6a8449d0149c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 29cacfcdf8be338c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/29cacfcdf8be338c_train_data.json
type:
field_input: dataset
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso13/f45a7d96-61c5-4c15-b58a-6a8449d0149c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/29cacfcdf8be338c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e3873475-6a24-45c6-8e86-c40b5fdad2bb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e3873475-6a24-45c6-8e86-c40b5fdad2bb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f45a7d96-61c5-4c15-b58a-6a8449d0149c
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0251 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ivangrapher/7f0458e2-64b1-48ff-bad8-5dd2fb86ae79 | ivangrapher | 2025-01-29T20:38:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T20:28:15Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f0458e2-64b1-48ff-bad8-5dd2fb86ae79
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- efb4c503400c2a9e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/efb4c503400c2a9e_train_data.json
type:
field_input: narrative
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 256
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: ivangrapher/7f0458e2-64b1-48ff-bad8-5dd2fb86ae79
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 40
micro_batch_size: 2
mlflow_experiment_name: /tmp/efb4c503400c2a9e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 20
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f88a2c67-61c0-4967-afee-32770d47baf4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f88a2c67-61c0-4967-afee-32770d47baf4
warmup_steps: 20
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7f0458e2-64b1-48ff-bad8-5dd2fb86ae79
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 2.8640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 3.3881 |
| 3.6257 | 0.0002 | 5 | 3.3782 |
| 3.1358 | 0.0005 | 10 | 3.2848 |
| 3.2651 | 0.0007 | 15 | 3.1270 |
| 3.2046 | 0.0009 | 20 | 3.0206 |
| 3.0382 | 0.0012 | 25 | 2.9454 |
| 2.7765 | 0.0014 | 30 | 2.8926 |
| 2.9101 | 0.0016 | 35 | 2.8680 |
| 3.0569 | 0.0018 | 40 | 2.8640 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/dda1de64-b6dc-4453-9fe2-32b9e86eebe6 | mrferr3t | 2025-01-29T20:36:25Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T20:31:15Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dda1de64-b6dc-4453-9fe2-32b9e86eebe6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- efb4c503400c2a9e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/efb4c503400c2a9e_train_data.json
type:
field_input: narrative
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/dda1de64-b6dc-4453-9fe2-32b9e86eebe6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 22
micro_batch_size: 2
mlflow_experiment_name: /tmp/efb4c503400c2a9e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f88a2c67-61c0-4967-afee-32770d47baf4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f88a2c67-61c0-4967-afee-32770d47baf4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dda1de64-b6dc-4453-9fe2-32b9e86eebe6
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 22
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7165 | 0.0001 | 1 | 4.9962 |
| 4.7915 | 0.0006 | 6 | 4.9266 |
| 4.7771 | 0.0011 | 12 | 4.2738 |
| 4.1579 | 0.0017 | 18 | 3.9085 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Lazaro97/detr-resnet-50-dc5-fashionpedia-finetuned | Lazaro97 | 2025-01-29T20:35:04Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50-dc5",
"base_model:finetune:facebook/detr-resnet-50-dc5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-01-29T19:43:03Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50-dc5
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50-dc5-fashionpedia-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-dc5-fashionpedia-finetuned
This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on the None dataset.
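Not part of the auto-generated card — a minimal inference sketch using the `transformers` object-detection pipeline; the image path and score threshold below are placeholders, and DETR checkpoints additionally require `timm` to be installed:
```python
from transformers import pipeline

# Placeholder image path and threshold; `timm` is needed for the DETR backbone.
detector = pipeline(
    "object-detection",
    model="Lazaro97/detr-resnet-50-dc5-fashionpedia-finetuned",
)
for det in detector("outfit.jpg", threshold=0.5):
    print(det["label"], round(det["score"], 3), det["box"])
```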
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.1+cpu
- Datasets 3.1.0
- Tokenizers 0.19.1
|
0x1202/f4cce54b-b62e-4b0f-8fee-a467dfaf1159 | 0x1202 | 2025-01-29T20:34:40Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T20:29:37Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4cce54b-b62e-4b0f-8fee-a467dfaf1159
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- c9824ec84fd2b22d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c9824ec84fd2b22d_train_data.json
type:
field_input: context
field_instruction: question
field_output: long_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: 0x1202/f4cce54b-b62e-4b0f-8fee-a467dfaf1159
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/c9824ec84fd2b22d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 65315679-54f1-4fb8-a8c7-2eefddab0fe0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 65315679-54f1-4fb8-a8c7-2eefddab0fe0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f4cce54b-b62e-4b0f-8fee-a467dfaf1159
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2336 | 0.0002 | 1 | 2.9845 |
| 2.7232 | 0.0080 | 50 | 2.3730 |
| 2.5613 | 0.0160 | 100 | 2.3438 |
| 2.6974 | 0.0239 | 150 | 2.3320 |
| 2.7488 | 0.0319 | 200 | 2.3253 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
minhnguyennnnnn/99fa811f-f99f-42b8-b604-f450c039d144 | minhnguyennnnnn | 2025-01-29T20:34:37Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T19:38:33Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 99fa811f-f99f-42b8-b604-f450c039d144
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e15924625f1f4c59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e15924625f1f4c59_train_data.json
type:
field_instruction: INSTRUCTION
field_output: RESPONSE
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhnguyennnnnn/99fa811f-f99f-42b8-b604-f450c039d144
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e15924625f1f4c59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6ed7c6ac-dcfc-4bd7-8673-497f923d29c1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6ed7c6ac-dcfc-4bd7-8673-497f923d29c1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 99fa811f-f99f-42b8-b604-f450c039d144
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7084 | 0.0229 | 200 | 1.8689 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mehmetkeremturkcan/SmollerLM-48M-Instruct-ft-dpo | mehmetkeremturkcan | 2025-01-29T20:34:17Z | 31 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:mehmetkeremturkcan/SmollerLM-48M-Instruct-ft-sft",
"base_model:finetune:mehmetkeremturkcan/SmollerLM-48M-Instruct-ft-sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T16:44:54Z | ---
base_model: mehmetkeremturkcan/SmollerLM-48M-Instruct-ft-sft
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: SmollerLM-48M-Instruct-ft-dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for SmollerLM-48M-Instruct-ft-dpo
This model is a fine-tuned version of [mehmetkeremturkcan/SmollerLM-48M-Instruct-ft-sft](https://huggingface.co/mehmetkeremturkcan/SmollerLM-48M-Instruct-ft-sft) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mehmetkeremturkcan/SmollerLM-48M-Instruct-ft-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mkturkcan/huggingface/runs/c50b941a)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
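The card does not include the training script itself; the following is only a rough sketch of how a comparable TRL DPO run could look for this SFT checkpoint and dataset — the `DPOConfig` values are placeholders, not the hyperparameters actually used.
```python
# Illustrative sketch, not the original training script; hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mehmetkeremturkcan/SmollerLM-48M-Instruct-ft-sft"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="SmollerLM-48M-Instruct-ft-dpo", beta=0.1, per_device_train_batch_size=4)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```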
### Framework versions
- TRL: 0.14.0
- Transformers: 4.46.2
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.20.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
biustnaspust/puszek36 | biustnaspust | 2025-01-29T20:31:11Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T20:26:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF | mradermacher | 2025-01-29T20:25:43Z | 648 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ltn18/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B",
"base_model:quantized:ltn18/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T19:44:33Z | ---
base_model: ltn18/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/ltn18/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
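Not part of the original card — a minimal sketch of loading one of these files with `llama-cpp-python`, assuming the library is installed and the Q4_K_M quant has already been downloaded locally:
```python
# Sketch only: assumes `pip install llama-cpp-python` and a locally downloaded quant file.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q4_K_M.gguf",  # local path (assumption)
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

result = llm("What is 12 * 7?", max_tokens=64)
print(result["choices"][0]["text"])
```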
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Qwen2.5-Math-1.5B.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mamung/e1c7e5e0-0d38-4674-9c2b-04ce5e195a2b | mamung | 2025-01-29T20:24:56Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T20:22:48Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e1c7e5e0-0d38-4674-9c2b-04ce5e195a2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2ab021df2db02dce_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2ab021df2db02dce_train_data.json
type:
field_instruction: Question
field_output: Answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: mamung/e1c7e5e0-0d38-4674-9c2b-04ce5e195a2b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00015
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/2ab021df2db02dce_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: eddysang
wandb_mode: online
wandb_name: 949be3b1-1138-4355-a97c-ca8a979a06e0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 949be3b1-1138-4355-a97c-ca8a979a06e0
warmup_steps: 20
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# e1c7e5e0-0d38-4674-9c2b-04ce5e195a2b
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0261 | 1 | 11.7571 |
| 11.7584 | 0.2353 | 9 | 11.7561 |
| 11.7568 | 0.4706 | 18 | 11.7528 |
| 11.75 | 0.7059 | 27 | 11.7465 |
| 11.7446 | 0.9412 | 36 | 11.7382 |
| 11.5807 | 1.1765 | 45 | 11.7303 |
| 11.7552 | 1.4118 | 54 | 11.7259 |
| 11.6633 | 1.6471 | 63 | 11.7237 |
| 11.5911 | 1.8824 | 72 | 11.7225 |
| 11.6439 | 2.1176 | 81 | 11.7218 |
| 11.9276 | 2.3529 | 90 | 11.7215 |
| 11.75 | 2.5882 | 99 | 11.7214 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso03/fce5df92-d9c1-4c40-af26-77b184d66e02 | lesso03 | 2025-01-29T20:23:56Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T20:23:39Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fce5df92-d9c1-4c40-af26-77b184d66e02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2ab021df2db02dce_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2ab021df2db02dce_train_data.json
type:
field_instruction: Question
field_output: Answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso03/fce5df92-d9c1-4c40-af26-77b184d66e02
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/2ab021df2db02dce_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 949be3b1-1138-4355-a97c-ca8a979a06e0
wandb_project: multi
wandb_run: your_name
wandb_runid: 949be3b1-1138-4355-a97c-ca8a979a06e0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fce5df92-d9c1-4c40-af26-77b184d66e02
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 39
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7593 | 0.9935 | 38 | 11.7563 |
| 20.3703 | 1.0196 | 39 | 11.7563 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Brent1167/potato-lora | Brent1167 | 2025-01-29T20:23:29Z | 18 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T04:30:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: potato
---
# Potato Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `potato` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Brent1167/potato-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
prxy5604/890d88aa-ac6a-4aaf-bbef-297b3aa0af4a | prxy5604 | 2025-01-29T20:20:04Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T20:14:42Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 890d88aa-ac6a-4aaf-bbef-297b3aa0af4a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e2382899c197d6f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e2382899c197d6f2_train_data.json
type:
field_input: Story_ID
field_instruction: context
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/890d88aa-ac6a-4aaf-bbef-297b3aa0af4a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e2382899c197d6f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f3a48781-8773-47e9-a6fd-fd794eeaa989
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f3a48781-8773-47e9-a6fd-fd794eeaa989
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 890d88aa-ac6a-4aaf-bbef-297b3aa0af4a
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1193 | 0.0047 | 1 | 4.4299 |
| 3.1592 | 0.2356 | 50 | 3.0928 |
| 2.7248 | 0.4711 | 100 | 2.8760 |
| 2.7777 | 0.7067 | 150 | 2.7708 |
| 2.8684 | 0.9423 | 200 | 2.7516 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung01/08785ba5-134d-423b-9dc8-c693b2df242a | nhung01 | 2025-01-29T20:19:58Z | 5 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T19:52:24Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 08785ba5-134d-423b-9dc8-c693b2df242a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f139bf284ccd53f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f139bf284ccd53f2_train_data.json
type:
field_input: right_context
field_instruction: prompt
field_output: groundtruth
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/08785ba5-134d-423b-9dc8-c693b2df242a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f139bf284ccd53f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cae9065f-baf8-489d-bdb5-78c627b7f61e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cae9065f-baf8-489d-bdb5-78c627b7f61e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 08785ba5-134d-423b-9dc8-c693b2df242a
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9954 | 0.6832 | 200 | 1.7887 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nghiatrannnnnn/d3745184-e404-4bce-87b3-d059f9c4c157 | nghiatrannnnnn | 2025-01-29T20:19:52Z | 5 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T19:51:43Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d3745184-e404-4bce-87b3-d059f9c4c157
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f139bf284ccd53f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f139bf284ccd53f2_train_data.json
type:
field_input: right_context
field_instruction: prompt
field_output: groundtruth
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nghiatrannnnnn/d3745184-e404-4bce-87b3-d059f9c4c157
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f139bf284ccd53f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cae9065f-baf8-489d-bdb5-78c627b7f61e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cae9065f-baf8-489d-bdb5-78c627b7f61e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d3745184-e404-4bce-87b3-d059f9c4c157
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9873 | 0.6832 | 200 | 1.8003 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gavrilstep/f83c090c-5d36-469e-a9ce-a5791e359c7a | gavrilstep | 2025-01-29T20:18:13Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T17:15:58Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f83c090c-5d36-469e-a9ce-a5791e359c7a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 50647f9e6e89cbb7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/50647f9e6e89cbb7_train_data.json
type:
field_input: ingredients_processed
field_instruction: title
field_output: directions
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/f83c090c-5d36-469e-a9ce-a5791e359c7a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 39
micro_batch_size: 2
mlflow_experiment_name: /tmp/50647f9e6e89cbb7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 21
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c53eddb1-5a0f-4d15-bd00-9389024c7d94
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c53eddb1-5a0f-4d15-bd00-9389024c7d94
warmup_steps: 21
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# f83c090c-5d36-469e-a9ce-a5791e359c7a
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 21
- training_steps: 39
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0004 | 10 | nan |
| 0.0 | 0.0006 | 15 | nan |
| 0.0 | 0.0008 | 20 | nan |
| 0.0 | 0.0010 | 25 | nan |
| 0.0 | 0.0012 | 30 | nan |
| 0.0 | 0.0014 | 35 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jobs-git/LTX-Video | jobs-git | 2025-01-29T20:18:09Z | 5 | 0 | diffusers | [
"diffusers",
"safetensors",
"ltx-video",
"text-to-video",
"image-to-video",
"en",
"license:other",
"diffusers:LTXPipeline",
"region:us"
] | image-to-video | 2025-01-29T20:18:08Z | ---
tags:
- ltx-video
- text-to-video
- image-to-video
pinned: true
language:
- en
license: other
---
# LTX-Video Model Card
This model card focuses on the model associated with the LTX-Video model, codebase available [here](https://github.com/Lightricks/LTX-Video).
LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 24 FPS videos at a 768x512 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content.
We provide a model for both text-to-video and image+text-to-video use cases; a minimal usage sketch follows the gallery below.
<img src="./media/trailer.gif" alt="trailer" width="512">
| | | | |
|:---:|:---:|:---:|:---:|
| <br><details style="max-width: 300px; margin: auto;"><summary>A woman with long brown hair and light skin smiles at another woman...</summary>A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman walks away from a white Jeep parked on a city street at night...</summary>A woman walks away from a white Jeep parked on a city street at night, then ascends a staircase and knocks on a door. The woman, wearing a dark jacket and jeans, walks away from the Jeep parked on the left side of the street, her back to the camera; she walks at a steady pace, her arms swinging slightly by her sides; the street is dimly lit, with streetlights casting pools of light on the wet pavement; a man in a dark jacket and jeans walks past the Jeep in the opposite direction; the camera follows the woman from behind as she walks up a set of stairs towards a building with a green door; she reaches the top of the stairs and turns left, continuing to walk towards the building; she reaches the door and knocks on it with her right hand; the camera remains stationary, focused on the doorway; the scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman with blonde hair styled up, wearing a black dress...</summary>A woman with blonde hair styled up, wearing a black dress with sequins and pearl earrings, looks down with a sad expression on her face. The camera remains stationary, focused on the woman's face. The lighting is dim, casting soft shadows on her face. The scene appears to be from a movie or TV show.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>The camera pans over a snow-covered mountain range...</summary>The camera pans over a snow-covered mountain range, revealing a vast expanse of snow-capped peaks and valleys.The mountains are covered in a thick layer of snow, with some areas appearing almost white while others have a slightly darker, almost grayish hue. The peaks are jagged and irregular, with some rising sharply into the sky while others are more rounded. The valleys are deep and narrow, with steep slopes that are also covered in snow. The trees in the foreground are mostly bare, with only a few leaves remaining on their branches. The sky is overcast, with thick clouds obscuring the sun. The overall impression is one of peace and tranquility, with the snow-covered mountains standing as a testament to the power and beauty of nature.</details> |
| <br><details style="max-width: 300px; margin: auto;"><summary>A woman with light skin, wearing a blue jacket and a black hat...</summary>A woman with light skin, wearing a blue jacket and a black hat with a veil, looks down and to her right, then back up as she speaks; she has brown hair styled in an updo, light brown eyebrows, and is wearing a white collared shirt under her jacket; the camera remains stationary on her face as she speaks; the background is out of focus, but shows trees and people in period clothing; the scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A man in a dimly lit room talks on a vintage telephone...</summary>A man in a dimly lit room talks on a vintage telephone, hangs up, and looks down with a sad expression. He holds the black rotary phone to his right ear with his right hand, his left hand holding a rocks glass with amber liquid. He wears a brown suit jacket over a white shirt, and a gold ring on his left ring finger. His short hair is neatly combed, and he has light skin with visible wrinkles around his eyes. The camera remains stationary, focused on his face and upper body. The room is dark, lit only by a warm light source off-screen to the left, casting shadows on the wall behind him. The scene appears to be from a movie.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A prison guard unlocks and opens a cell door...</summary>A prison guard unlocks and opens a cell door to reveal a young man sitting at a table with a woman. The guard, wearing a dark blue uniform with a badge on his left chest, unlocks the cell door with a key held in his right hand and pulls it open; he has short brown hair, light skin, and a neutral expression. The young man, wearing a black and white striped shirt, sits at a table covered with a white tablecloth, facing the woman; he has short brown hair, light skin, and a neutral expression. The woman, wearing a dark blue shirt, sits opposite the young man, her face turned towards him; she has short blonde hair and light skin. The camera remains stationary, capturing the scene from a medium distance, positioned slightly to the right of the guard. The room is dimly lit, with a single light fixture illuminating the table and the two figures. The walls are made of large, grey concrete blocks, and a metal door is visible in the background. The scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman with blood on her face and a white tank top...</summary>A woman with blood on her face and a white tank top looks down and to her right, then back up as she speaks. She has dark hair pulled back, light skin, and her face and chest are covered in blood. The camera angle is a close-up, focused on the woman's face and upper torso. The lighting is dim and blue-toned, creating a somber and intense atmosphere. The scene appears to be from a movie or TV show.</details> |
| <br><details style="max-width: 300px; margin: auto;"><summary>A man with graying hair, a beard, and a gray shirt...</summary>A man with graying hair, a beard, and a gray shirt looks down and to his right, then turns his head to the left. The camera angle is a close-up, focused on the man's face. The lighting is dim, with a greenish tint. The scene appears to be real-life footage. Step</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A clear, turquoise river flows through a rocky canyon...</summary>A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall and forming a pool of water at the bottom.The river is the main focus of the scene, with its clear water reflecting the surrounding trees and rocks. The canyon walls are steep and rocky, with some vegetation growing on them. The trees are mostly pine trees, with their green needles contrasting with the brown and gray rocks. The overall tone of the scene is one of peace and tranquility.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A man in a suit enters a room and speaks to two women...</summary>A man in a suit enters a room and speaks to two women sitting on a couch. The man, wearing a dark suit with a gold tie, enters the room from the left and walks towards the center of the frame. He has short gray hair, light skin, and a serious expression. He places his right hand on the back of a chair as he approaches the couch. Two women are seated on a light-colored couch in the background. The woman on the left wears a light blue sweater and has short blonde hair. The woman on the right wears a white sweater and has short blonde hair. The camera remains stationary, focusing on the man as he enters the room. The room is brightly lit, with warm tones reflecting off the walls and furniture. The scene appears to be from a film or television show.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>The waves crash against the jagged rocks of the shoreline...</summary>The waves crash against the jagged rocks of the shoreline, sending spray high into the air.The rocks are a dark gray color, with sharp edges and deep crevices. The water is a clear blue-green, with white foam where the waves break against the rocks. The sky is a light gray, with a few white clouds dotting the horizon.</details> |
| <br><details style="max-width: 300px; margin: auto;"><summary>The camera pans across a cityscape of tall buildings...</summary>The camera pans across a cityscape of tall buildings with a circular building in the center. The camera moves from left to right, showing the tops of the buildings and the circular building in the center. The buildings are various shades of gray and white, and the circular building has a green roof. The camera angle is high, looking down at the city. The lighting is bright, with the sun shining from the upper left, casting shadows from the buildings. The scene is computer-generated imagery.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A man walks towards a window, looks out, and then turns around...</summary>A man walks towards a window, looks out, and then turns around. He has short, dark hair, dark skin, and is wearing a brown coat over a red and gray scarf. He walks from left to right towards a window, his gaze fixed on something outside. The camera follows him from behind at a medium distance. The room is brightly lit, with white walls and a large window covered by a white curtain. As he approaches the window, he turns his head slightly to the left, then back to the right. He then turns his entire body to the right, facing the window. The camera remains stationary as he stands in front of the window. The scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>Two police officers in dark blue uniforms and matching hats...</summary>Two police officers in dark blue uniforms and matching hats enter a dimly lit room through a doorway on the left side of the frame. The first officer, with short brown hair and a mustache, steps inside first, followed by his partner, who has a shaved head and a goatee. Both officers have serious expressions and maintain a steady pace as they move deeper into the room. The camera remains stationary, capturing them from a slightly low angle as they enter. The room has exposed brick walls and a corrugated metal ceiling, with a barred window visible in the background. The lighting is low-key, casting shadows on the officers' faces and emphasizing the grim atmosphere. The scene appears to be from a film or television show.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman with short brown hair, wearing a maroon sleeveless top...</summary>A woman with short brown hair, wearing a maroon sleeveless top and a silver necklace, walks through a room while talking, then a woman with pink hair and a white shirt appears in the doorway and yells. The first woman walks from left to right, her expression serious; she has light skin and her eyebrows are slightly furrowed. The second woman stands in the doorway, her mouth open in a yell; she has light skin and her eyes are wide. The room is dimly lit, with a bookshelf visible in the background. The camera follows the first woman as she walks, then cuts to a close-up of the second woman's face. The scene is captured in real-life footage.</details> |
## Model Details
- **Developed by:** Lightricks
- **Model type:** Diffusion-based text-to-video and image-to-video generation model
- **Language(s):** English
## Usage
### Direct use
You can use the model for purposes permitted under the [license](https://github.com/Lightricks/LTX-Video/blob/main/LICENSE).
### General tips:
* The model works on resolutions that are divisible by 32 and on frame counts of the form 8 * K + 1 (e.g. 257). If the resolution or number of frames does not satisfy these constraints, the input is padded with -1 and then cropped to the desired resolution and number of frames (see the sketch after these tips).
* The model works best at resolutions under 720 x 1280 and with fewer than 257 frames.
* Prompts should be in English; the more detailed, the better. A good prompt looks like `The turquoise waves crash against the dark, jagged rocks of the shore, sending white foam spraying into the air. The scene is dominated by the stark contrast between the bright blue water and the dark, almost black rocks. The water is a clear, turquoise color, and the waves are capped with white foam. The rocks are dark and jagged, and they are covered in patches of green moss. The shore is lined with lush green vegetation, including trees and bushes. In the background, there are rolling hills covered in dense forest. The sky is cloudy, and the light is dim.`
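A minimal sketch of the rounding implied by the resolution and frame-count constraints above; `pad_to_valid` and `_round_up` are illustrative helpers, not functions from the LTX-Video codebase:
```python
def _round_up(x: int, multiple: int) -> int:
    """Round x up to the nearest multiple of `multiple`."""
    return ((x + multiple - 1) // multiple) * multiple

def pad_to_valid(height: int, width: int, num_frames: int) -> tuple[int, int, int]:
    """Round spatial dims up to multiples of 32 and the frame count up to the next 8*K + 1."""
    return (
        _round_up(height, 32),
        _round_up(width, 32),
        _round_up(num_frames - 1, 8) + 1,
    )

print(pad_to_valid(480, 704, 161))  # (480, 704, 161) -- already valid
print(pad_to_valid(500, 700, 160))  # (512, 704, 161)
```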
### Online demo
The model is accessible right away via the following links:
- [HF Playground](https://huggingface.co/spaces/Lightricks/LTX-Video-Playground)
- [Fal.ai text-to-video](https://fal.ai/models/fal-ai/ltx-video)
- [Fal.ai image-to-video](https://fal.ai/models/fal-ai/ltx-video/image-to-video)
### ComfyUI
To use our model with ComfyUI, please follow the instructions in the dedicated [ComfyUI repo](https://github.com/Lightricks/ComfyUI-LTXVideo/).
### Run locally
#### Installation
The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.
```bash
git clone https://github.com/Lightricks/LTX-Video.git
cd LTX-Video
# create env
python -m venv env
source env/bin/activate
python -m pip install -e .\[inference-script\]
```
Then, download the model from [Hugging Face](https://huggingface.co/Lightricks/LTX-Video):
```python
from huggingface_hub import snapshot_download
model_path = 'PATH' # The local directory to save downloaded checkpoint
snapshot_download("Lightricks/LTX-Video", local_dir=model_path, local_dir_use_symlinks=False, repo_type='model')
```
#### Inference
To use our model, please follow the inference code in [inference.py](https://github.com/Lightricks/LTX-Video/blob/main/inference.py):
##### For text-to-video generation:
```bash
python inference.py --ckpt_dir 'PATH' --prompt "PROMPT" --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED
```
##### For image-to-video generation:
```bash
python inference.py --ckpt_dir 'PATH' --prompt "PROMPT" --input_image_path IMAGE_PATH --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED
```
### Diffusers 🧨
LTX Video is compatible with the [Diffusers Python library](https://huggingface.co/docs/diffusers/main/en/index). It supports both text-to-video and image-to-video generation.
Make sure you install `diffusers` before trying out the examples below.
```bash
pip install -U git+https://github.com/huggingface/diffusers
```
Now, you can run the examples below:
```py
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage"
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
video = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=704,
height=480,
num_frames=161,
num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```
For image-to-video:
```py
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
pipe = LTXImageToVideoPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")
image = load_image(
"https://huggingface.co/datasets/a-r-r-o-w/tiny-meme-dataset-captioned/resolve/main/images/8.png"
)
prompt = "A young girl stands calmly in the foreground, looking directly at the camera, as a house fire rages in the background. Flames engulf the structure, with smoke billowing into the air. Firefighters in protective gear rush to the scene, a fire truck labeled '38' visible behind them. The girl's neutral expression contrasts sharply with the chaos of the fire, creating a poignant and emotionally charged scene."
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
video = pipe(
image=image,
prompt=prompt,
negative_prompt=negative_prompt,
width=704,
height=480,
num_frames=161,
num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```
To learn more, check out the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).
Diffusers also supports directly loading from the original LTX checkpoints using the `from_single_file()` method. Check out [this section](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video#loading-single-files) to learn more.
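As a rough sketch of what single-file loading can look like (the checkpoint filename below is an assumption; see the linked documentation section for the exact, up-to-date usage):
```python
import torch
from diffusers import LTXPipeline

# Assumed checkpoint filename -- verify against the Lightricks/LTX-Video repository.
single_file_url = "https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.safetensors"
pipe = LTXPipeline.from_single_file(single_file_url, torch_dtype=torch.bfloat16)
pipe.to("cuda")
```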
## Limitations
- This model is not intended or able to provide factual information.
- As a statistical model, this checkpoint might amplify existing societal biases.
- The model may fail to generate videos that match the prompts perfectly.
- Prompt following is heavily influenced by the prompting style. |
nhoxinh/30c2f95c-3fdb-44ae-b127-89db16262054 | nhoxinh | 2025-01-29T20:17:27Z | 5 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T19:52:26Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 30c2f95c-3fdb-44ae-b127-89db16262054
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f139bf284ccd53f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f139bf284ccd53f2_train_data.json
type:
field_input: right_context
field_instruction: prompt
field_output: groundtruth
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/30c2f95c-3fdb-44ae-b127-89db16262054
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f139bf284ccd53f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cae9065f-baf8-489d-bdb5-78c627b7f61e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cae9065f-baf8-489d-bdb5-78c627b7f61e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 30c2f95c-3fdb-44ae-b127-89db16262054
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0064 | 0.6832 | 200 | 1.8013 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlfoundations-dev/llama3_mammoth_dcft_ablation_original_50k | mlfoundations-dev | 2025-01-29T20:15:17Z | 746 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T19:59:32Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3_mammoth_dcft_ablation_original_50k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3_mammoth_dcft_ablation_original_50k
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the mlfoundations-dev/wia_dcft_webinstruct_original_uniform_50k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1738
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.896 | 7 | 0.0652 |
| 1.0796 | 1.96 | 15 | 0.0610 |
| 0.9233 | 2.768 | 21 | 0.0594 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
jobs-git/chronos-bolt-base | jobs-git | 2025-01-29T20:15:15Z | 115 | 0 | null | [
"safetensors",
"t5",
"time series",
"forecasting",
"pretrained models",
"foundation models",
"time series foundation models",
"time-series",
"time-series-forecasting",
"arxiv:1910.10683",
"arxiv:2403.07815",
"license:apache-2.0",
"region:us"
] | time-series-forecasting | 2025-01-29T20:15:14Z | ---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---
# Chronos-Bolt⚡ (Base)
Chronos-Bolt is a family of pretrained time series forecasting models which can be used for zero-shot forecasting. It is based on the [T5 encoder-decoder architecture](https://arxiv.org/abs/1910.10683) and has been trained on nearly 100 billion time series observations. It chunks the historical time series context into patches of multiple observations, which are then input into the encoder. The decoder then uses these representations to directly generate quantile forecasts across multiple future steps—a method known as direct multi-step forecasting. Chronos-Bolt models are up to 250 times faster and 20 times more memory-efficient than the [original Chronos](https://arxiv.org/abs/2403.07815) models of the same size.
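As a toy illustration of the patching idea described above (the series length and patch size here are arbitrary choices for the example, not the model's actual configuration):
```python
import torch

context = torch.arange(512, dtype=torch.float32)  # one series with 512 observations
patch_size = 16                                   # illustrative patch length
patches = context.reshape(-1, patch_size)         # -> [32, 16]: 32 patches fed to the encoder
print(patches.shape)
```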
The following plot compares the inference time of Chronos-Bolt against the original Chronos models for forecasting 1024 time series with a context length of 512 observations and a prediction horizon of 64 steps.
<center>
<img src="https://autogluon.s3.amazonaws.com/images/chronos_bolt_speed.svg" width="50%"/>
</center>
Chronos-Bolt models are not only significantly faster but also more accurate than the original Chronos models. The following plot reports the probabilistic and point forecasting performance of Chronos-Bolt in terms of the [Weighted Quantile Loss (WQL)](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.WQL) and the [Mean Absolute Scaled Error (MASE)](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.MASE), respectively, aggregated over 27 datasets (see the [Chronos paper](https://arxiv.org/abs/2403.07815) for details on this benchmark). Remarkably, despite having no prior exposure to these datasets during training, the zero-shot Chronos-Bolt models outperform commonly used statistical models and deep learning models that have been trained on these datasets (highlighted by *). They also outperform other foundation models (denoted by +), which were pretrained on some of the benchmark datasets and are therefore not entirely zero-shot. Notably, Chronos-Bolt (Base) also surpasses the original Chronos (Large) model in forecasting accuracy while being over 600 times faster.
<center>
<img src="https://autogluon.s3.amazonaws.com/images/chronos_bolt_accuracy.svg" width="80%"/>
</center>
Chronos-Bolt models are available in the following sizes.
<div align="center">
| Model | Parameters | Based on |
| ----------------------------------------------------------------------------- | ---------- | ---------------------------------------------------------------------- |
| [**chronos-bolt-tiny**](https://huggingface.co/amazon/chronos-bolt-tiny) | 9M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
| [**chronos-bolt-mini**](https://huggingface.co/amazon/chronos-bolt-mini) | 21M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
| [**chronos-bolt-small**](https://huggingface.co/amazon/chronos-bolt-small) | 48M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
| [**chronos-bolt-base**](https://huggingface.co/amazon/chronos-bolt-base) | 205M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
</div>
## Usage with AutoGluon
The recommended way of using Chronos for production use cases is through [AutoGluon](https://auto.gluon.ai/stable/index.html), which supports effortless fine-tuning, augmenting Chronos models with exogenous information through covariate regressors, ensembling with other statistical and machine learning models, and seamless deployment on AWS with SageMaker.
Check out the AutoGluon Chronos [tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html).
A minimal example showing how to perform zero-shot inference using Chronos-Bolt with AutoGluon:
```
pip install autogluon
```
```python
from autogluon.timeseries import TimeSeriesPredictor, TimeSeriesDataFrame
df = TimeSeriesDataFrame("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_hourly/train.csv")
predictor = TimeSeriesPredictor(prediction_length=48).fit(
df,
hyperparameters={
"Chronos": {"model_path": "amazon/chronos-bolt-base"},
},
)
predictions = predictor.predict(df)
```
## Usage with inference library
Alternatively, you can install the inference package from the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting).
This is intended for research purposes and provides a minimal interface to Chronos models.
Install the library by running:
```
pip install chronos-forecasting
```
A minimal example showing how to perform inference using Chronos-Bolt models:
```python
import pandas as pd # requires: pip install pandas
import torch
from chronos import BaseChronosPipeline
pipeline = BaseChronosPipeline.from_pretrained(
"amazon/chronos-bolt-base",
device_map="cuda", # use "cpu" for CPU inference and "mps" for Apple Silicon
torch_dtype=torch.bfloat16,
)
df = pd.read_csv(
"https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv"
)
# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
# Chronos-Bolt models generate quantile forecasts, so forecast has shape
# [num_series, num_quantiles, prediction_length].
forecast = pipeline.predict(
context=torch.tensor(df["#Passengers"]), prediction_length=12
)
```
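As a follow-up, one way to pull point estimates out of the returned quantile tensor; this assumes the pipeline's default nine quantile levels (0.1 through 0.9), so the median sits at index 4:
```python
# forecast: [num_series, num_quantiles, prediction_length]
low = forecast[0, 0]      # 0.1 quantile (assuming default quantile levels)
median = forecast[0, 4]   # 0.5 quantile, i.e. the point forecast
high = forecast[0, -1]    # 0.9 quantile
print(median)             # 12-step median forecast for the single input series
```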
## Citation
If you find Chronos or Chronos-Bolt models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
```
@article{ansari2024chronos,
title={Chronos: Learning the Language of Time Series},
  author={Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
journal={Transactions on Machine Learning Research},
issn={2835-8856},
year={2024},
url={https://openreview.net/forum?id=gerNCVqqtR}
}
```
## License
This project is licensed under the Apache-2.0 License.
|
ancient41/8991fc9c-55ce-4b0c-8f57-21bea6fa29b1 | ancient41 | 2025-01-29T20:14:53Z | 5 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-01-29T19:34:04Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8991fc9c-55ce-4b0c-8f57-21bea6fa29b1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- eb3f318e5167bb8f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/eb3f318e5167bb8f_train_data.json
type:
field_input: question_text
field_instruction: system_prompt
field_output: orig_answer_texts
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: ancient41/8991fc9c-55ce-4b0c-8f57-21bea6fa29b1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/eb3f318e5167bb8f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9a26b2bc-46d9-4c50-80ee-3f9db3e249d5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9a26b2bc-46d9-4c50-80ee-3f9db3e249d5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8991fc9c-55ce-4b0c-8f57-21bea6fa29b1
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9568 | 0.0003 | 1 | 1.9723 |
| 4.6885 | 0.0148 | 50 | 1.5352 |
| 4.4748 | 0.0297 | 100 | 1.4204 |
| 3.7392 | 0.0445 | 150 | 1.2910 |
| 4.5318 | 0.0594 | 200 | 1.2691 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Lauther/emb-multilingual-e5-large-1e | Lauther | 2025-01-29T20:12:44Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5220",
"loss:CosineSimilarityLoss",
"dataset:Lauther/embeddings-train-semantic",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-01-29T20:11:55Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5220
- loss:CosineSimilarityLoss
base_model: intfloat/multilingual-e5-large-instruct
widget:
- source_sentence: Identify the column that stores the uncertainty value.
sentences:
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- 'What is equipment calibration?
Calibration is a metrological verification process used to ensure the accuracy
of measurement equipment. It is performed periodically, based on intervals set
by the company or a regulatory body.
Purpose of calibration:
The calibration process corrects any deviations in how the equipment measures
physical magnitudes (variables). This ensures the equipment provides accurate
and reliable data.
Calibration cycles:
There are two main calibration cycles:
1. As-found: Represents the equipment''s measurement accuracy before any adjustments
are made. This cycle is almost always implemented.
2. As-left: Represents the equipment''s measurement accuracy after adjustments
are made. This cycle is used depending on regulatory requirements.
Calibration uncertainty:
- Uncertainty is included in the results of a calibration.
- Calibration uncertainty refers to the margin of error in the device''s measurements,
which also affects the uncertainty of the measured variable or magnitude.'
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- source_sentence: SELECT * FROM EquipmentType LIMIT 1
sentences:
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- 'What is a flow computer?
A flow computer is a device used in measurement engineering. It collects analog
and digital data from flow meters and other sensors.
Key features of a flow computer:
- It has a unique name, firmware version, and manufacturer information.
- It is designed to record and process data such as temperature, pressure, and
fluid volume (for gases or oils).
Main function:
The flow computer sends the collected data to a measurement system. This allows
measurement engineers to analyze the data and perform their tasks effectively.'
- source_sentence: What tables store measurement system data?
sentences:
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'What do measurement equipment measure?
Each equipment measures a physical magnitude, also known as a variable. Based
on the type of variable they measure, devices are classified into different categories.
Equipment classification:
- Primary meter: Assigned by default to equipments like orifice plates.
- Secondary meter: Assigned by default to equipments like transmitters.
- Tertiary meter: Used for other types of equipments.
Equipment types in the database:
The database includes a table listing all equipment types. Examples of equipment
types are:
- Differential pressure transmitters
- RTDs (Resistance Temperature Detectors)
- Orifice plates
- Multivariable transmitters
- Ultrasonic meters
Meteorological checks for equipments:
Each equipment type is assigned a meteorological check, which can be either:
- Calibration: To ensure measurement accuracy.
- Inspection: To verify proper functioning.
Data storage in tables:
The database also includes a separate table for equipment classifications, which
are:
- Primary meter
- Secondary meter
- Tertiary meter
So, an equipment has equipment types and this types has classifications.'
- source_sentence: What is the table structure for equipment types?
sentences:
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- source_sentence: What columns store the uncertainty values?
sentences:
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'How are flow computers and measurement systems related?
Flow computers can have multiple systems assigned to them. However, a measurement
system can only be assigned to one flow computer.
Database terminology:
In the database, this relationship is referred to as:
- Meter streams
- Meter runs
- Sections
Storage of the relationship:
The relationship between a flow computer and its assigned measurement system is
stored in a special table.
User context:
When a user refers to a "meter stream," they are indicating that they are searching
for a measurement system assigned to a specific flow computer.'
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
datasets:
- Lauther/embeddings-train-semantic
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on intfloat/multilingual-e5-large-instruct
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision c9e87c786ffac96aeaeb42863276930883923ecb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lauther/emb-multilingual-e5-large-1e")
# Run inference
sentences = [
'What columns store the uncertainty values?',
'How are flow computers and measurement systems related?\nFlow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.\n\nDatabase terminology:\nIn the database, this relationship is referred to as:\n- Meter streams\n- Meter runs\n- Sections\n\nStorage of the relationship:\nThe relationship between a flow computer and its assigned measurement system is stored in a special table.\n\nUser context:\nWhen a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.',
'What is uncertainty?\nUncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.\n\nTypes of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of magnitudes (variables):\n - Refers to the uncertainty of specific variables, such as temperature or pressure.\n - It is calculated after calibrating a device or obtained from the equipment manufacturer\'s manual.\n - This uncertainty serves as a starting point for further calculations related to the equipment.\n\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated for the overall flow measurement.\n - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of the measurement system. Think of them as the "building blocks."\n- Do not confuse the two types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific to individual variables (e.g., temperature, pressure).\n - **Uncertainty of the measurement system**: Specific to the overall flow measurement.\n\nDatabase storage for uncertainties:\nIn the database, uncertainty calculations are stored in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores the uncertainty values for specific variables (e.g., temperature, pressure).\n\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n- To find the uncertainty of the measurement system, join the measurement systems table with the uncertainty of the measurement system table.\n- To find the uncertainty of a specific variable (magnitude), join the measurement systems table with the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not confuse the two types of uncertainty:\n- If the user requests the uncertainty of the measurement system, use the first join (measurement systems table + uncertainty of the measurement system table).\n- If the user requests the uncertainty of a specific variable (magnitude) in a report, use the second join (measurement systems table + uncertainty of magnitudes table).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 5,220 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 8 tokens</li><li>mean: 18.3 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 120 tokens</li><li>mean: 257.3 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.23</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>What is the data type of differential pressure in the measurement system?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the structure of the &&&equipment_data&&& table?</code> | <code>How are flow computers and measurement systems related?<br>Flow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.<br><br>Database terminology:<br>In the database, this relationship is referred to as:<br>- Meter streams<br>- Meter runs<br>- Sections<br><br>Storage of the relationship:<br>The relationship between a flow computer and its assigned measurement system is stored in a special table.<br><br>User context:<br>When a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.</code> | <code>0.35000000000000003</code> |
| <code>Find the columns in the flow computer table that identify the flow computer.</code> | <code>What kind of data store an equipment?<br>Equipments can capture meteorological data, such as pressure, temperature, and volume (magnitudes). This data is essential for users to perform various calculations.<br><br>Data storage:<br>- The measured values are stored in a special table in the database for magnitudes. This table contains the values of the variables captured by the equipments.<br>- These values are **direct measurements** from the fluid (e.g., raw pressure, temperature, or volume readings). **They are not calculated values**, such as uncertainty.<br>- The values stored in the variable values table are **different** from variable uncertainty values, which are calculated separately and represent the margin of error.<br><br>Accessing the data:<br>- Users typically access the data by referring to the readings from the measurement system, not directly from the individual equipments.<br>- The readings are stored in a "variable values" table within the database.<br><br>Linking variable names:<br>If the user needs to kno...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
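For reference, below is a minimal, hedged sketch of running this kind of fine-tuning with `CosineSimilarityLoss` in Sentence Transformers. The base checkpoint (`intfloat/multilingual-e5-large`), the split name, and the default training arguments are assumptions inferred from the model name and this card, not the exact recipe used for this checkpoint.
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Assumption: start from the multilingual-e5-large base suggested by the model name.
model = SentenceTransformer("intfloat/multilingual-e5-large")

# The dataset provides sentence1, sentence2 and a float score in [0, 1];
# the trainer uses the "score" column as the label. Split name is assumed.
train_dataset = load_dataset("Lauther/embeddings-train-semantic", split="train")

# CosineSimilarityLoss regresses cosine(sentence1, sentence2) onto the score (MSELoss by default).
loss = losses.CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
model.save_pretrained("emb-multilingual-e5-large-finetuned")
```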
### Evaluation Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 652 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 652 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.8 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 120 tokens</li><li>mean: 253.84 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.24</li><li>max: 0.9</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>How can I filter uncertainty reports by equipment tag?</code> | <code>How does a flow computer generate and store reports?<br>A flow computer generates daily or hourly reports to provide users with operational data. These reports are stored in the flow computer's memory in an organized format.<br><br>Report structure:<br>- Each report includes:<br>- Date and time of the data recording.<br>- Data recorded from flow computers.<br><br>Data storage in tables:<br>The reports are saved in two tables:<br>1. Main table (Index):<br> - Stores the date, time, and flow computer identifier.<br>2. Detail table:<br> - Stores the measured values associated with the report.<br><br>Connection to the Modbus table:<br>The flow computer's reports are linked to a Modbus table. This table contains the names corresponding to each value in the reports, making it easier to interpret the data.</code> | <code>0.09999999999999999</code> |
| <code>What is the purpose of the flow_data table?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the column name for the report date in the Reports table?</code> | <code>What is equipment calibration?<br>Calibration is a metrological verification process used to ensure the accuracy of measurement equipment. It is performed periodically, based on intervals set by the company or a regulatory body.<br><br>Purpose of calibration:<br>The calibration process corrects any deviations in how the equipment measures physical magnitudes (variables). This ensures the equipment provides accurate and reliable data.<br><br>Calibration cycles:<br>There are two main calibration cycles:<br>1. As-found: Represents the equipment's measurement accuracy before any adjustments are made. This cycle is almost always implemented.<br>2. As-left: Represents the equipment's measurement accuracy after adjustments are made. This cycle is used depending on regulatory requirements.<br><br>Calibration uncertainty:<br>- Uncertainty is included in the results of a calibration.<br>- Calibration uncertainty refers to the margin of error in the device's measurements, which also affects the uncertainty of the measured variable or ...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_gpu_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0307 | 10 | 1.4036 | - |
| 0.0613 | 20 | 0.3733 | - |
| 0.0920 | 30 | 0.2036 | - |
| 0.1226 | 40 | 0.1664 | - |
| 0.1533 | 50 | 0.1537 | 0.0505 |
| 0.1839 | 60 | 0.1255 | - |
| 0.2146 | 70 | 0.1204 | - |
| 0.2452 | 80 | 0.164 | - |
| 0.2759 | 90 | 0.0955 | - |
| 0.3065 | 100 | 0.0965 | 0.0225 |
| 0.3372 | 110 | 0.1001 | - |
| 0.3678 | 120 | 0.0915 | - |
| 0.3985 | 130 | 0.0798 | - |
| 0.4291 | 140 | 0.0738 | - |
| 0.4598 | 150 | 0.098 | 0.0217 |
| 0.4904 | 160 | 0.0775 | - |
| 0.5211 | 170 | 0.0946 | - |
| 0.5517 | 180 | 0.0829 | - |
| 0.5824 | 190 | 0.0586 | - |
| 0.6130 | 200 | 0.0523 | 0.0264 |
| 0.6437 | 210 | 0.0743 | - |
| 0.6743 | 220 | 0.0758 | - |
| 0.7050 | 230 | 0.0687 | - |
| 0.7356 | 240 | 0.0746 | - |
| 0.7663 | 250 | 0.0622 | 0.0173 |
| 0.7969 | 260 | 0.0522 | - |
| 0.8276 | 270 | 0.0628 | - |
| 0.8582 | 280 | 0.0535 | - |
| 0.8889 | 290 | 0.0797 | - |
| 0.9195 | 300 | 0.055 | 0.0169 |
| 0.9502 | 310 | 0.0649 | - |
| 0.9808 | 320 | 0.0479 | - |
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
great0001/adbfae2c-bafa-490a-b004-73da82531fd3 | great0001 | 2025-01-29T20:11:40Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"region:us"
] | null | 2025-01-29T20:08:51Z | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: adbfae2c-bafa-490a-b004-73da82531fd3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c4e6804d4a2a7552_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c4e6804d4a2a7552_train_data.json
type:
field_input: answer
field_instruction: question
field_output: answer_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/adbfae2c-bafa-490a-b004-73da82531fd3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c4e6804d4a2a7552_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 97665195-034e-4a0e-a46d-17e58782c3fd
wandb_project: Birthday-SN56-14-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 97665195-034e-4a0e-a46d-17e58782c3fd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# adbfae2c-bafa-490a-b004-73da82531fd3
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7430
## Model description
More information needed
## Intended uses & limitations
More information needed
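No usage example is provided; as a hedged sketch, the LoRA adapter produced by this run can be loaded on top of the base model with 🤗 PEFT. The generation settings and example prompt below are illustrative assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "DeepMount00/Llama-3-8b-Ita"
adapter_id = "great0001/adbfae2c-bafa-490a-b004-73da82531fd3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# The adapter was trained on '{instruction} {input}'-style prompts (see the axolotl config above).
prompt = "Riassumi la seguente risposta: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```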
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8351 | 0.0007 | 1 | 2.2817 |
| 1.4166 | 0.0096 | 13 | 1.1271 |
| 0.867 | 0.0192 | 26 | 0.8317 |
| 0.5852 | 0.0288 | 39 | 0.7430 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shopitalic/vday-gift-card-rafael | shopitalic | 2025-01-29T20:11:33Z | 40 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T20:11:19Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# vday gift card rafael
<Gallery />
## Model description
## Trigger words
You should use `` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/shopitalic/vday-gift-card-rafael/tree/main) them in the Files & versions tab.
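Since the card ships no usage snippet, the following is a minimal, hedged sketch of loading these weights with 🧨 diffusers. The LoRA file name (`lora.safetensors`) and the prompt are assumptions — check the Files & versions tab for the actual file name, and use the repo's trigger word if one is defined.
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Assumption: the LoRA weights are stored as "lora.safetensors" (common for fal.ai trainings).
pipeline.load_lora_weights("shopitalic/vday-gift-card-rafael", weight_name="lora.safetensors")

# Illustrative prompt only; no trigger word is published for this model.
image = pipeline("a valentine's day gift card on a marble table").images[0]
image.save("vday_gift_card.png")
```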
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
mradermacher/MawaredT1-GGUF | mradermacher | 2025-01-29T20:11:32Z | 400 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"ar",
"en",
"base_model:MawaredHR/Mawared_T1",
"base_model:quantized:MawaredHR/Mawared_T1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-09T06:26:37Z | ---
base_model: MawaredHR/Mawared_T1
language:
- ar
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MawaredHR/Mawared_T1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MawaredT1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
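As a hedged example (the file name is taken from the table below, and the `huggingface-cli` invocation assumes a recent `huggingface_hub` install), one way to fetch a single quant and, should a quant ever be shipped in parts, to concatenate it:
```bash
# Download one quant file from this repo (Q4_K_M is the "fast, recommended" choice below).
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/MawaredT1-GGUF MawaredT1.Q4_K_M.gguf --local-dir .

# If a quant is split into parts (e.g. *.gguf.part1of2), concatenate them in order:
cat MawaredT1.Q4_K_M.gguf.part*of* > MawaredT1.Q4_K_M.gguf
```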
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-GGUF/resolve/main/MawaredT1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MawaredT1-i1-GGUF | mradermacher | 2025-01-29T20:10:49Z | 885 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"ar",
"en",
"base_model:MawaredHR/Mawared_T1",
"base_model:quantized:MawaredHR/Mawared_T1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-09T08:26:18Z | ---
base_model: MawaredHR/Mawared_T1
language:
- ar
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MawaredHR/Mawared_T1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MawaredT1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
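As a hedged sketch (not part of the original card), a quant from this repo can be loaded with `llama-cpp-python`; the chosen quant file, context size, and generation settings below are illustrative assumptions:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Assumption: the i1-Q4_K_M quant listed below; pick another file name from the table as needed.
model_path = hf_hub_download(
    repo_id="mradermacher/MawaredT1-i1-GGUF",
    filename="MawaredT1.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize best practices for HR records management."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```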
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredT1-i1-GGUF/resolve/main/MawaredT1.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
leixa/8614dd60-986d-4502-ac6a-d233d2fe1bc6 | leixa | 2025-01-29T20:10:29Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T20:01:39Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8614dd60-986d-4502-ac6a-d233d2fe1bc6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 694af3ec11ba71f1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/694af3ec11ba71f1_train_data.json
type:
field_instruction: prompt
field_output: original_chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: leixa/8614dd60-986d-4502-ac6a-d233d2fe1bc6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/694af3ec11ba71f1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 7cfe335c-7747-4974-8b3e-7d1376c9d8f5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7cfe335c-7747-4974-8b3e-7d1376c9d8f5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8614dd60-986d-4502-ac6a-d233d2fe1bc6
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0027 | 1 | 1.6927 |
| 1.7223 | 0.0247 | 9 | 1.6644 |
| 1.5611 | 0.0494 | 18 | 1.6076 |
| 1.5775 | 0.0741 | 27 | 1.5894 |
| 1.5173 | 0.0988 | 36 | 1.5796 |
| 1.5583 | 0.1235 | 45 | 1.5720 |
| 1.5434 | 0.1481 | 54 | 1.5671 |
| 1.5588 | 0.1728 | 63 | 1.5640 |
| 1.5692 | 0.1975 | 72 | 1.5615 |
| 1.5414 | 0.2222 | 81 | 1.5602 |
| 1.6096 | 0.2469 | 90 | 1.5598 |
| 1.5132 | 0.2716 | 99 | 1.5596 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibu12/flux-lora-elephant | shibu12 | 2025-01-29T20:09:00Z | 17 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T19:34:33Z | ---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: a <s0><s1> elephant
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - shibu12/flux-lora-elephant
<Gallery />
## Model description
These are shibu12/flux-lora-elephant DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was LoRA for the text encoder enabled? False.
Pivotal tuning was enabled: True.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Download model
[Download the *.safetensors LoRA](/shibu12/flux-lora-elephant/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('shibu12/flux-lora-elephant', weight_name='pytorch_lora_weights.safetensors')
# Note: the original snippet passed a local absolute path as `filename`; hf_hub_download expects the
# file name inside the repo (assumed here to be "flux-lora-elephant_emb.safetensors").
embedding_path = hf_hub_download(repo_id='shibu12/flux-lora-elephant', filename='flux-lora-elephant_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["t5"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('a <s0><s1> elephant').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
gavrilstep/d44a98d7-61c2-4261-9d72-8421821665b9 | gavrilstep | 2025-01-29T20:07:45Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T20:02:53Z | ---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d44a98d7-61c2-4261-9d72-8421821665b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 83082862772cd8f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/83082862772cd8f6_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/d44a98d7-61c2-4261-9d72-8421821665b9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 39
micro_batch_size: 2
mlflow_experiment_name: /tmp/83082862772cd8f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 21
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 56fa0ec0-356d-47c4-9549-c0f0e2ab0535
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 56fa0ec0-356d-47c4-9549-c0f0e2ab0535
warmup_steps: 21
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# d44a98d7-61c2-4261-9d72-8421821665b9
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 21
- training_steps: 39
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | nan |
| 0.0 | 0.0042 | 5 | nan |
| 0.0 | 0.0084 | 10 | nan |
| 0.0 | 0.0125 | 15 | nan |
| 0.0 | 0.0167 | 20 | nan |
| 0.0 | 0.0209 | 25 | nan |
| 0.0 | 0.0251 | 30 | nan |
| 0.0 | 0.0292 | 35 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Mawared_GS-GGUF | mradermacher | 2025-01-29T20:05:26Z | 327 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:MawaredHR/Mawared_GS",
"base_model:quantized:MawaredHR/Mawared_GS",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-12T10:09:57Z | ---
base_model: MawaredHR/Mawared_GS
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MawaredHR/Mawared_GS
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
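As an illustrative, hedged example (the quant file name is taken from the table below, and the `llama-cli` binary name assumes a recent llama.cpp build), a quant can be run directly from the command line:
```bash
# Fetch one quant (Q4_K_M is marked "fast, recommended" below) and run it with llama.cpp.
huggingface-cli download mradermacher/Mawared_GS-GGUF Mawared_GS.Q4_K_M.gguf --local-dir .
llama-cli -m Mawared_GS.Q4_K_M.gguf -p "Explain what an HR onboarding checklist should contain." -n 256
```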
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mawared_GS-GGUF/resolve/main/Mawared_GS.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mrferr3t/01ebce7b-bd56-42e8-9504-8a7bb01f5c0a | mrferr3t | 2025-01-29T20:05:25Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T20:03:31Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 01ebce7b-bd56-42e8-9504-8a7bb01f5c0a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f139bf284ccd53f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f139bf284ccd53f2_train_data.json
type:
field_input: right_context
field_instruction: prompt
field_output: groundtruth
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/01ebce7b-bd56-42e8-9504-8a7bb01f5c0a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 7
micro_batch_size: 2
mlflow_experiment_name: /tmp/f139bf284ccd53f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cae9065f-baf8-489d-bdb5-78c627b7f61e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cae9065f-baf8-489d-bdb5-78c627b7f61e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 01ebce7b-bd56-42e8-9504-8a7bb01f5c0a
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5372 | 0.0034 | 1 | 5.0842 |
| 5.1395 | 0.0068 | 2 | 5.0849 |
| 4.0026 | 0.0137 | 4 | 5.0797 |
| 5.2132 | 0.0205 | 6 | 5.0407 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/4a290634-e2d8-4306-b66c-fdb840c17256 | nttx | 2025-01-29T20:05:13Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-01-29T19:44:44Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4a290634-e2d8-4306-b66c-fdb840c17256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 530c0ca90e2fe049_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/530c0ca90e2fe049_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/4a290634-e2d8-4306-b66c-fdb840c17256
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/530c0ca90e2fe049_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1270482e-ec21-4704-98fd-bf2652488e7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1270482e-ec21-4704-98fd-bf2652488e7c
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4a290634-e2d8-4306-b66c-fdb840c17256
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3077
## Model description
More information needed
## Intended uses & limitations
More information needed
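No usage guidance is given; as a hedged sketch, the adapter can be merged back into the base `defog/sqlcoder-7b-2` weights for standalone inference. The output path and dtype below are assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("defog/sqlcoder-7b-2", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "nttx/4a290634-e2d8-4306-b66c-fdb840c17256")

# Fold the LoRA deltas into the base weights and save a plain Transformers checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("sqlcoder-7b-2-merged")
AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2").save_pretrained("sqlcoder-7b-2-merged")
```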
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4791 | 0.0317 | 200 | 0.3077 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nblinh/aa48d42b-9dc2-4353-a761-5160525c5055 | nblinh | 2025-01-29T20:02:34Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T19:34:35Z | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa48d42b-9dc2-4353-a761-5160525c5055
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c4e6804d4a2a7552_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c4e6804d4a2a7552_train_data.json
type:
field_input: answer
field_instruction: question
field_output: answer_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/aa48d42b-9dc2-4353-a761-5160525c5055
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c4e6804d4a2a7552_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 97665195-034e-4a0e-a46d-17e58782c3fd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 97665195-034e-4a0e-a46d-17e58782c3fd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# aa48d42b-9dc2-4353-a761-5160525c5055
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5863
## Model description
More information needed
## Intended uses & limitations
More information needed
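A minimal inference sketch, assuming the adapter is attached to the base model in 8-bit exactly as it was trained; the prompt is illustrative and not taken from the training data.
```python
# Hedged sketch: load DeepMount00/Llama-3-8b-Ita in 8-bit (mirroring load_in_8bit above)
# and attach this LoRA adapter for inference.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "DeepMount00/Llama-3-8b-Ita"
adapter_id = "nblinh/aa48d42b-9dc2-4353-a761-5160525c5055"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Training used the format '{instruction} {input}', so instruction and input are simply concatenated.
prompt = "Summarize the following answer. The calibration was completed and the transmitter passed."
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```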
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5902 | 0.1478 | 200 | 0.5863 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Lauther/emb-stella_en_400M_v5-2e | Lauther | 2025-01-29T20:00:24Z | 18 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5220",
"loss:CosineSimilarityLoss",
"custom_code",
"dataset:Lauther/embeddings-train-semantic",
"arxiv:1908.10084",
"base_model:NovaSearch/stella_en_400M_v5",
"base_model:finetune:NovaSearch/stella_en_400M_v5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-01-29T19:59:49Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5220
- loss:CosineSimilarityLoss
base_model: NovaSearch/stella_en_400M_v5
widget:
- source_sentence: Identify the column that stores the uncertainty value.
sentences:
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- 'What is equipment calibration?
Calibration is a metrological verification process used to ensure the accuracy
of measurement equipment. It is performed periodically, based on intervals set
by the company or a regulatory body.
Purpose of calibration:
The calibration process corrects any deviations in how the equipment measures
physical magnitudes (variables). This ensures the equipment provides accurate
and reliable data.
Calibration cycles:
There are two main calibration cycles:
1. As-found: Represents the equipment''s measurement accuracy before any adjustments
are made. This cycle is almost always implemented.
2. As-left: Represents the equipment''s measurement accuracy after adjustments
are made. This cycle is used depending on regulatory requirements.
Calibration uncertainty:
- Uncertainty is included in the results of a calibration.
- Calibration uncertainty refers to the margin of error in the device''s measurements,
which also affects the uncertainty of the measured variable or magnitude.'
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- source_sentence: SELECT * FROM EquipmentType LIMIT 1
sentences:
- 'What kind of data store an equipment?
Equipments can capture meteorological data, such as pressure, temperature, and
volume (magnitudes). This data is essential for users to perform various calculations.
Data storage:
- The measured values are stored in a special table in the database for magnitudes.
This table contains the values of the variables captured by the equipments.
- These values are **direct measurements** from the fluid (e.g., raw pressure,
temperature, or volume readings). **They are not calculated values**, such as
uncertainty.
- The values stored in the variable values table are **different** from variable
uncertainty values, which are calculated separately and represent the margin of
error.
Accessing the data:
- Users typically access the data by referring to the readings from the measurement
system, not directly from the individual equipments.
- The readings are stored in a "variable values" table within the database.
Linking variable names:
If the user needs to know the name of a variable, they must link the data to another
table that stores information about the types of variables.'
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- 'What is a flow computer?
A flow computer is a device used in measurement engineering. It collects analog
and digital data from flow meters and other sensors.
Key features of a flow computer:
- It has a unique name, firmware version, and manufacturer information.
- It is designed to record and process data such as temperature, pressure, and
fluid volume (for gases or oils).
Main function:
The flow computer sends the collected data to a measurement system. This allows
measurement engineers to analyze the data and perform their tasks effectively.'
- source_sentence: What tables store measurement system data?
sentences:
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'What do measurement equipment measure?
Each equipment measures a physical magnitude, also known as a variable. Based
on the type of variable they measure, devices are classified into different categories.
Equipment classification:
- Primary meter: Assigned by default to equipments like orifice plates.
- Secondary meter: Assigned by default to equipments like transmitters.
- Tertiary meter: Used for other types of equipments.
Equipment types in the database:
The database includes a table listing all equipment types. Examples of equipment
types are:
- Differential pressure transmitters
- RTDs (Resistance Temperature Detectors)
- Orifice plates
- Multivariable transmitters
- Ultrasonic meters
Meteorological checks for equipments:
Each equipment type is assigned a meteorological check, which can be either:
- Calibration: To ensure measurement accuracy.
- Inspection: To verify proper functioning.
Data storage in tables:
The database also includes a separate table for equipment classifications, which
are:
- Primary meter
- Secondary meter
- Tertiary meter
So, an equipment has equipment types and this types has classifications.'
- source_sentence: What is the table structure for equipment types?
sentences:
- "How does a flow computer generate and store reports?\nA flow computer generates\
\ daily or hourly reports to provide users with operational data. These reports\
\ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
- Each report includes:\n- Date and time of the data recording.\n- Data recorded\
\ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
\ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
\ identifier.\n2. Detail table:\n - Stores the measured values associated with\
\ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
\ are linked to a Modbus table. This table contains the names corresponding to\
\ each value in the reports, making it easier to interpret the data."
- "What is measuring equipment?\nMeasuring equipment refers to the devices that\
\ make up a measurement system. Each piece of equipment has:\n- A unique serial\
\ number for identification.\n- A technical name, such as transmitter, plate,\
\ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
\ equipment is assigned to a measurement system, it is given a unique identifier\
\ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
\ considered in use in a measurement system.\n- If it does not have a tag, it\
\ is considered spare or unused\n\nEquipment assignment based on technology:\n\
The type of equipment assigned to a measurement system depends on the technology\
\ used, for example:\n1. Differential technology (for gas measurement):\n -\
\ Static pressure transmitters\n - Differential pressure transmitters\n \
\ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
\ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
\ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
\ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
- A measurement system can have multiple pieces of equipment.\n- However, a piece\
\ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
- The database includes a special table to manage the list of equipment assigned\
\ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
\ are searching for operational equipment assigned to a measurement system.\n\
- If a user is looking for spare or unused equipment, they are searching for equipment\
\ not listed in the tagged equipment table.\n- Commonly used when user refers\
\ directly to an \"\"Equipment Tag\""
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
- source_sentence: What columns store the uncertainty values?
sentences:
- "What is a measurement system?\nA measurement system, also referred to as a delivery\
\ point, measurement point, or reception point, is used to measure and monitor\
\ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
1. Measurement technology:\n - Differential: Used for precise measurements.\n\
\ - Linear: Used for straightforward measurements.\n\n2. System identifier\
\ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
\ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
\ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
\ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
\ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
\ systems:\nMeasurement systems are classified based on the stage of the process\
\ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
- Appropriation\n- Custody\n- Production Poços"
- 'How are flow computers and measurement systems related?
Flow computers can have multiple systems assigned to them. However, a measurement
system can only be assigned to one flow computer.
Database terminology:
In the database, this relationship is referred to as:
- Meter streams
- Meter runs
- Sections
Storage of the relationship:
The relationship between a flow computer and its assigned measurement system is
stored in a special table.
User context:
When a user refers to a "meter stream," they are indicating that they are searching
for a measurement system assigned to a specific flow computer.'
- "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
\ and reliability of results obtained from equipment or measurement systems. It\
\ quantifies the potential error or margin of error in measurements.\n\nTypes\
\ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
\ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
\ such as temperature or pressure.\n - It is calculated after calibrating a\
\ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
\ serves as a starting point for further calculations related to the equipment.\n\
\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
\ for the overall flow measurement.\n - It depends on the uncertainties of\
\ the individual variables (magnitudes) and represents the combined margin of\
\ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
\ (variables) are the foundation for calculating the uncertainty of the measurement\
\ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
\ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
\ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
\ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
\ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
\ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
\ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
\ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
- To find the uncertainty of the measurement system, join the measurement systems\
\ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
\ of a specific variable (magnitude), join the measurement systems table with\
\ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
\ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
\ of the measurement system, use the first join (measurement systems table + uncertainty\
\ of the measurement system table).\n- If the user requests the uncertainty of\
\ a specific variable (magnitude) in a report, use the second join (measurement\
\ systems table + uncertainty of magnitudes table)."
datasets:
- Lauther/embeddings-train-semantic
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on NovaSearch/stella_en_400M_v5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5) on the [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5) <!-- at revision db4ace10eb6a7131d349077b2eccc5c76a77277b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 1024, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lauther/emb-stella_en_400M_v5-2e")
# Run inference
sentences = [
'What columns store the uncertainty values?',
'How are flow computers and measurement systems related?\nFlow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.\n\nDatabase terminology:\nIn the database, this relationship is referred to as:\n- Meter streams\n- Meter runs\n- Sections\n\nStorage of the relationship:\nThe relationship between a flow computer and its assigned measurement system is stored in a special table.\n\nUser context:\nWhen a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.',
'What is uncertainty?\nUncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.\n\nTypes of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of magnitudes (variables):\n - Refers to the uncertainty of specific variables, such as temperature or pressure.\n - It is calculated after calibrating a device or obtained from the equipment manufacturer\'s manual.\n - This uncertainty serves as a starting point for further calculations related to the equipment.\n\n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated for the overall flow measurement.\n - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of the measurement system. Think of them as the "building blocks."\n- Do not confuse the two types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific to individual variables (e.g., temperature, pressure).\n - **Uncertainty of the measurement system**: Specific to the overall flow measurement.\n\nDatabase storage for uncertainties:\nIn the database, uncertainty calculations are stored in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores the uncertainty values for specific variables (e.g., temperature, pressure).\n\n2. Uncertainty of the measurement system:\n - Stores the uncertainty values for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n- To find the uncertainty of the measurement system, join the measurement systems table with the uncertainty of the measurement system table.\n- To find the uncertainty of a specific variable (magnitude), join the measurement systems table with the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not confuse the two types of uncertainty:\n- If the user requests the uncertainty of the measurement system, use the first join (measurement systems table + uncertainty of the measurement system table).\n- If the user requests the uncertainty of a specific variable (magnitude) in a report, use the second join (measurement systems table + uncertainty of magnitudes table).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
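The same embeddings can also back a small semantic-search step. A minimal sketch, assuming invented query/passage strings and that `trust_remote_code=True` is required because the base model ships custom modeling code:
```python
from sentence_transformers import SentenceTransformer

# trust_remote_code is assumed to be needed because the base model uses custom code.
model = SentenceTransformer("Lauther/emb-stella_en_400M_v5-2e", trust_remote_code=True)

query = "Which table stores the calibration uncertainty of a transmitter?"
passages = [
    "Uncertainty of magnitudes (variables) is stored in its own table.",
    "Flow computer reports are linked to a Modbus table.",
    "Equipment tags identify operational equipment in a measurement system.",
]

query_embedding = model.encode(query)
passage_embeddings = model.encode(passages)

# Cosine similarities, shape [1, 3]; pick the best-matching passage.
scores = model.similarity(query_embedding, passage_embeddings)
best = int(scores.argmax())
print(passages[best], float(scores[0, best]))
```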
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 5,220 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.47 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 108 tokens</li><li>mean: 222.4 tokens</li><li>max: 452 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.23</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>What is the data type of differential pressure in the measurement system?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the structure of the &&&equipment_data&&& table?</code> | <code>How are flow computers and measurement systems related?<br>Flow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.<br><br>Database terminology:<br>In the database, this relationship is referred to as:<br>- Meter streams<br>- Meter runs<br>- Sections<br><br>Storage of the relationship:<br>The relationship between a flow computer and its assigned measurement system is stored in a special table.<br><br>User context:<br>When a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.</code> | <code>0.35000000000000003</code> |
| <code>Find the columns in the flow computer table that identify the flow computer.</code> | <code>What kind of data store an equipment?<br>Equipments can capture meteorological data, such as pressure, temperature, and volume (magnitudes). This data is essential for users to perform various calculations.<br><br>Data storage:<br>- The measured values are stored in a special table in the database for magnitudes. This table contains the values of the variables captured by the equipments.<br>- These values are **direct measurements** from the fluid (e.g., raw pressure, temperature, or volume readings). **They are not calculated values**, such as uncertainty.<br>- The values stored in the variable values table are **different** from variable uncertainty values, which are calculated separately and represent the margin of error.<br><br>Accessing the data:<br>- Users typically access the data by referring to the readings from the measurement system, not directly from the individual equipments.<br>- The readings are stored in a "variable values" table within the database.<br><br>Linking variable names:<br>If the user needs to kno...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### embeddings-train-semantic
* Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
* Size: 652 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 652 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.03 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 108 tokens</li><li>mean: 219.25 tokens</li><li>max: 452 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.24</li><li>max: 0.9</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>How can I filter uncertainty reports by equipment tag?</code> | <code>How does a flow computer generate and store reports?<br>A flow computer generates daily or hourly reports to provide users with operational data. These reports are stored in the flow computer's memory in an organized format.<br><br>Report structure:<br>- Each report includes:<br>- Date and time of the data recording.<br>- Data recorded from flow computers.<br><br>Data storage in tables:<br>The reports are saved in two tables:<br>1. Main table (Index):<br> - Stores the date, time, and flow computer identifier.<br>2. Detail table:<br> - Stores the measured values associated with the report.<br><br>Connection to the Modbus table:<br>The flow computer's reports are linked to a Modbus table. This table contains the names corresponding to each value in the reports, making it easier to interpret the data.</code> | <code>0.09999999999999999</code> |
| <code>What is the purpose of the flow_data table?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
| <code>What is the column name for the report date in the Reports table?</code> | <code>What is equipment calibration?<br>Calibration is a metrological verification process used to ensure the accuracy of measurement equipment. It is performed periodically, based on intervals set by the company or a regulatory body.<br><br>Purpose of calibration:<br>The calibration process corrects any deviations in how the equipment measures physical magnitudes (variables). This ensures the equipment provides accurate and reliable data.<br><br>Calibration cycles:<br>There are two main calibration cycles:<br>1. As-found: Represents the equipment's measurement accuracy before any adjustments are made. This cycle is almost always implemented.<br>2. As-left: Represents the equipment's measurement accuracy after adjustments are made. This cycle is used depending on regulatory requirements.<br><br>Calibration uncertainty:<br>- Uncertainty is included in the results of a calibration.<br>- Calibration uncertainty refers to the margin of error in the device's measurements, which also affects the uncertainty of the measured variable or ...</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
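A minimal training sketch with the sentence-transformers trainer, assuming the dataset exposes a `train` split with `sentence1`, `sentence2`, and `score` columns; this is not the exact script used for this model, and the `output_dir` is an arbitrary example.
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# The base model needs trust_remote_code because it ships custom modeling code.
model = SentenceTransformer("NovaSearch/stella_en_400M_v5", trust_remote_code=True)
train_dataset = load_dataset("Lauther/embeddings-train-semantic", split="train")  # split name assumed

# CosineSimilarityLoss regresses cosine(sentence1, sentence2) against the float score column via MSE.
loss = losses.CosineSimilarityLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="emb-stella_en_400M_v5-2e",  # arbitrary example path
    num_train_epochs=2,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    warmup_ratio=0.1,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```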
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_gpu_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0307 | 10 | 0.3462 | - |
| 0.0613 | 20 | 0.1901 | - |
| 0.0920 | 30 | 0.1534 | - |
| 0.1226 | 40 | 0.0859 | - |
| 0.1533 | 50 | 0.079 | 0.0233 |
| 0.1839 | 60 | 0.1003 | - |
| 0.2146 | 70 | 0.095 | - |
| 0.2452 | 80 | 0.1396 | - |
| 0.2759 | 90 | 0.0936 | - |
| 0.3065 | 100 | 0.0707 | 0.0204 |
| 0.3372 | 110 | 0.0828 | - |
| 0.3678 | 120 | 0.0673 | - |
| 0.3985 | 130 | 0.0821 | - |
| 0.4291 | 140 | 0.07 | - |
| 0.4598 | 150 | 0.0729 | 0.0203 |
| 0.4904 | 160 | 0.0796 | - |
| 0.5211 | 170 | 0.0885 | - |
| 0.5517 | 180 | 0.0799 | - |
| 0.5824 | 190 | 0.0576 | - |
| 0.6130 | 200 | 0.0527 | 0.0185 |
| 0.6437 | 210 | 0.0778 | - |
| 0.6743 | 220 | 0.0683 | - |
| 0.7050 | 230 | 0.0721 | - |
| 0.7356 | 240 | 0.0938 | - |
| 0.7663 | 250 | 0.0637 | 0.0146 |
| 0.7969 | 260 | 0.0426 | - |
| 0.8276 | 270 | 0.0611 | - |
| 0.8582 | 280 | 0.0575 | - |
| 0.8889 | 290 | 0.0775 | - |
| 0.9195 | 300 | 0.0516 | 0.0148 |
| 0.9502 | 310 | 0.0616 | - |
| 0.9808 | 320 | 0.0458 | - |
| 1.0092 | 330 | 0.0416 | - |
| 1.0398 | 340 | 0.0607 | - |
| 1.0705 | 350 | 0.0471 | 0.0143 |
| 1.1011 | 360 | 0.0667 | - |
| 1.1318 | 370 | 0.0533 | - |
| 1.1625 | 380 | 0.0517 | - |
| 1.1931 | 390 | 0.0575 | - |
| 1.2238 | 400 | 0.0325 | 0.0138 |
| 1.2544 | 410 | 0.0296 | - |
| 1.2851 | 420 | 0.0553 | - |
| 1.3157 | 430 | 0.0511 | - |
| 1.3464 | 440 | 0.0491 | - |
| 1.3770 | 450 | 0.0601 | 0.0136 |
| 1.4077 | 460 | 0.0397 | - |
| 1.4383 | 470 | 0.0467 | - |
| 1.4690 | 480 | 0.0356 | - |
| 1.4996 | 490 | 0.0429 | - |
| 1.5303 | 500 | 0.0467 | 0.0128 |
| 1.5609 | 510 | 0.0487 | - |
| 1.5916 | 520 | 0.0338 | - |
| 1.6222 | 530 | 0.0448 | - |
| 1.6529 | 540 | 0.0441 | - |
| 1.6835 | 550 | 0.0303 | 0.0126 |
| 1.7142 | 560 | 0.0319 | - |
| 1.7448 | 570 | 0.0373 | - |
| 1.7755 | 580 | 0.0432 | - |
| 1.8061 | 590 | 0.0444 | - |
| 1.8368 | 600 | 0.0459 | 0.0125 |
| 1.8674 | 610 | 0.0361 | - |
| 1.8981 | 620 | 0.0333 | - |
| 1.9287 | 630 | 0.0404 | - |
| 1.9594 | 640 | 0.0353 | - |
| 1.9900 | 650 | 0.0536 | 0.0122 |
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
kostiantynk1205/d4b87c98-1604-4a62-b175-3acc800f15ff | kostiantynk1205 | 2025-01-29T20:00:03Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-29T19:55:45Z | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d4b87c98-1604-4a62-b175-3acc800f15ff
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a407e9d4b265b545_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a407e9d4b265b545_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/d4b87c98-1604-4a62-b175-3acc800f15ff
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/a407e9d4b265b545_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b40f774d-b555-461f-a6b8-26c1ff32e383
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b40f774d-b555-461f-a6b8-26c1ff32e383
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d4b87c98-1604-4a62-b175-3acc800f15ff
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3641
## Model description
More information needed
## Intended uses & limitations
More information needed
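A minimal sketch of merging the adapter into its (tiny, random) base model for standalone use, assuming an arbitrary output directory:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "trl-internal-testing/tiny-random-LlamaForCausalLM"
adapter_id = "kostiantynk1205/d4b87c98-1604-4a62-b175-3acc800f15ff"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Fold the LoRA deltas into the base weights so peft is no longer required at inference time.
merged = model.merge_and_unload()
merged.save_pretrained("d4b87c98-merged")   # arbitrary example path
tokenizer.save_pretrained("d4b87c98-merged")
```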
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 10.3669 |
| 10.3715 | 0.0002 | 13 | 10.3659 |
| 10.3641 | 0.0004 | 26 | 10.3647 |
| 10.3672 | 0.0006 | 39 | 10.3641 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sniperfix/dbe9d801-5cfe-4580-8b01-fc489a62c4b0 | sniperfix | 2025-01-29T19:59:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-29T19:40:42Z | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dbe9d801-5cfe-4580-8b01-fc489a62c4b0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a407e9d4b265b545_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a407e9d4b265b545_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: sniperfix/dbe9d801-5cfe-4580-8b01-fc489a62c4b0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- down_proj
- up_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 90
micro_batch_size: 2
mlflow_experiment_name: /tmp/a407e9d4b265b545_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: indexjupri-sniper-country
wandb_mode: online
wandb_name: b40f774d-b555-461f-a6b8-26c1ff32e383
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b40f774d-b555-461f-a6b8-26c1ff32e383
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# dbe9d801-5cfe-4580-8b01-fc489a62c4b0
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3326
## Model description
More information needed
## Intended uses & limitations
More information needed
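A minimal sketch of inspecting the published adapter configuration, which should mirror the LoRA settings above (rank 32, alpha 64, and the listed target modules):
```python
from peft import PeftConfig

config = PeftConfig.from_pretrained("sniperfix/dbe9d801-5cfe-4580-8b01-fc489a62c4b0")
print(config.base_model_name_or_path)  # expected: trl-internal-testing/tiny-random-LlamaForCausalLM
print(config.r, config.lora_alpha)     # expected: 32 64
print(sorted(config.target_modules))   # expected: the q/k/v/o and gate/up/down projection modules
```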
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: AdamW (torch) with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-05 (overriding the default betas=(0.9,0.999) and epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 90
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 10.3669 |
| 10.3652 | 0.0010 | 8 | 10.3662 |
| 10.3661 | 0.0021 | 16 | 10.3637 |
| 10.3622 | 0.0031 | 24 | 10.3586 |
| 10.3549 | 0.0042 | 32 | 10.3513 |
| 10.3483 | 0.0052 | 40 | 10.3445 |
| 10.346 | 0.0063 | 48 | 10.3397 |
| 10.3398 | 0.0073 | 56 | 10.3365 |
| 10.339 | 0.0084 | 64 | 10.3344 |
| 10.3378 | 0.0094 | 72 | 10.3332 |
| 10.3357 | 0.0105 | 80 | 10.3327 |
| 10.3325 | 0.0115 | 88 | 10.3326 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
leixa/5176d34e-110f-4f49-8f87-70237384698c | leixa | 2025-01-29T19:59:42Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T17:15:34Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5176d34e-110f-4f49-8f87-70237384698c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 50647f9e6e89cbb7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/50647f9e6e89cbb7_train_data.json
type:
field_input: ingredients_processed
field_instruction: title
field_output: directions
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: leixa/5176d34e-110f-4f49-8f87-70237384698c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/50647f9e6e89cbb7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: c53eddb1-5a0f-4d15-bd00-9389024c7d94
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c53eddb1-5a0f-4d15-bd00-9389024c7d94
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5176d34e-110f-4f49-8f87-70237384698c
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1581
## Model description
More information needed
## Intended uses & limitations
More information needed
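The author has not filled this in. As a hedged starting point only, this repository appears to hold a LoRA adapter for the stated base model, so it can presumably be loaded with `peft`; the sketch below is untested, assumes `transformers`, `peft`, and `accelerate` are installed, and mirrors the `'{instruction} {input}'` (recipe title + ingredients) format used in training:
```python
# Hypothetical usage sketch: load the base model and attach this LoRA adapter with peft.
# Assumes the adapter weights in this repo are compatible with the stated base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Intel/neural-chat-7b-v3-3"
adapter_id = "leixa/5176d34e-110f-4f49-8f87-70237384698c"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # apply the LoRA weights

# Prompt follows the training format: '{title} {ingredients}'
inputs = tokenizer("Pasta Carbonara spaghetti, eggs, pecorino, guanciale",
                   return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0],
                       skip_special_tokens=True))
```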
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.4558 |
| 5.0853 | 0.0014 | 9 | 1.2306 |
| 4.4517 | 0.0029 | 18 | 1.1830 |
| 4.7073 | 0.0043 | 27 | 1.1743 |
| 5.006 | 0.0058 | 36 | 1.1708 |
| 4.8233 | 0.0072 | 45 | 1.1664 |
| 4.7141 | 0.0087 | 54 | 1.1638 |
| 4.6263 | 0.0101 | 63 | 1.1616 |
| 4.7946 | 0.0116 | 72 | 1.1597 |
| 4.5899 | 0.0130 | 81 | 1.1587 |
| 4.3554 | 0.0144 | 90 | 1.1582 |
| 4.6366 | 0.0159 | 99 | 1.1581 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Llama-3.1_OpenScholar-8B-GGUF | mradermacher | 2025-01-29T19:59:09Z | 431 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:OpenSciLM/Llama-3.1_OpenScholar-8B",
"base_model:quantized:OpenSciLM/Llama-3.1_OpenScholar-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-16T13:29:39Z | ---
base_model: OpenSciLM/Llama-3.1_OpenScholar-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/OpenSciLM/Llama-3.1_OpenScholar-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
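As a concrete illustration, here is a minimal sketch that fetches one of the single-file quants listed below and runs it with `llama-cpp-python`; the chosen file is simply the "fast, recommended" Q4_K_M entry from the table, and the prompt and generation settings are illustrative assumptions, not recommendations from this repo:
```python
# Minimal sketch: download a single-file quant and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; any file from the table works.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Llama-3.1_OpenScholar-8B-GGUF",
    filename="Llama-3.1_OpenScholar-8B.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Summarize the main idea of retrieval-augmented generation.", max_tokens=200)
print(out["choices"][0]["text"])
```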
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso18/ad03c4c5-1964-480a-b434-1758b7e5368b | lesso18 | 2025-01-29T19:59:07Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T19:39:10Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ad03c4c5-1964-480a-b434-1758b7e5368b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e15924625f1f4c59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e15924625f1f4c59_train_data.json
type:
field_instruction: INSTRUCTION
field_output: RESPONSE
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso18/ad03c4c5-1964-480a-b434-1758b7e5368b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/e15924625f1f4c59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6ed7c6ac-dcfc-4bd7-8673-497f923d29c1
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 6ed7c6ac-dcfc-4bd7-8673-497f923d29c1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ad03c4c5-1964-480a-b434-1758b7e5368b
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6932 | 0.0229 | 200 | 1.8614 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF | mradermacher | 2025-01-29T19:58:23Z | 669 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:OpenSciLM/Llama-3.1_OpenScholar-8B",
"base_model:quantized:OpenSciLM/Llama-3.1_OpenScholar-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-16T22:01:41Z | ---
base_model: OpenSciLM/Llama-3.1_OpenScholar-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenSciLM/Llama-3.1_OpenScholar-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
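For single-file quants like these, recent releases of `llama-cpp-python` can also pull a file straight from the Hub; the sketch below is hedged (it assumes a version that ships `Llama.from_pretrained`, and the chosen file is just the "optimal size/speed/quality" i1-Q4_K_S entry from the table):
```python
# Sketch only: requires a llama-cpp-python release providing Llama.from_pretrained
# (it wraps huggingface_hub internally); otherwise download the file manually as above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF",
    filename="Llama-3.1_OpenScholar-8B.i1-Q4_K_S.gguf",
    n_ctx=4096,
)
print(llm("What is an imatrix quant?", max_tokens=128)["choices"][0]["text"])
```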
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1_OpenScholar-8B-i1-GGUF/resolve/main/Llama-3.1_OpenScholar-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
chrisrutherford/Llama-3.1-8B-ReasonChatV1 | chrisrutherford | 2025-01-29T19:55:16Z | 13 | 1 | null | [
"safetensors",
"llama",
"license:llama3.1",
"region:us"
] | null | 2025-01-29T19:45:11Z | ---
license: llama3.1
---
<|start_header_id|>user<|end_header_id|>
If in order to do 1 good deed it is required to do 2 bad deeds of equal measure, is the good deed justified?
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
#BACKGROUND
The question revolves around the idea of whether a good deed can be justified if it is accompanied by two bad deeds of equal measure. This scenario raises complex moral and ethical considerations, particularly regarding the nature of good and evil, and whether the presence of evil can ever justify an otherwise good action.
#INDUCTION
- Observing patterns where every instance of a good deed is accompanied by a corresponding bad deed leads to the conclusion that all good deeds are inherently bad.
- If one good deed necessitates the doing of two bad deeds, then it can be inferred that every good deed is counterbalanced by an equal amount of bad.
- Given the premise that a good deed is offset by two bad deeds, it can be generalized that the presence of any good deed implies an accompanying evil.
#DEDUCTION
- If the premise is accepted that 1 good deed requires 2 bad deeds, then logically, a single good deed can never be justified.
- Assuming every good deed is matched by a bad deed of equal measure, it follows that the good deed is not truly good, as it is always accompanied by evil.
- If a good deed is offset by two bad deeds, then, based on the principle of balance, the good deed cannot be considered good, as the bad deeds negate its positive value.
#REASONING
Given the pattern where every good deed is accompanied by a corresponding bad deed, it appears that the existence of good deeds is inherently tied to the presence of evil. This observation suggests that the two bad deeds required to counterbalance a single good deed imply that the good deed is not self-sufficiently good but is instead reliant on the presence of evil to be considered good. Furthermore, if a good deed necessitates the doing of two bad deeds, it logically follows that the good deed cannot be justified without the accompanying evil, as the balance between the good and the bad would otherwise be disrupted. The premise that every good deed is offset by an equal amount of bad leads to the conclusion that the good deed's value is neutralized by the presence of the two bad deeds, suggesting that the good deed is not truly good in an absolute sense.
#ANSWER
The good deed in question cannot be considered justified if it is accompanied by two bad deeds of equal measure. The presence of the two bad deeds negates the positive value of the good deed, indicating that the good deed is not self-sufficiently good. Instead, the good deed's existence is dependent on the coexistence of evil, suggesting that the good deed is part of a larger moral equation where the positive and negative aspects are balanced. This perspective implies that true goodness cannot exist in isolation but must be considered in relation to the broader moral landscape, where good and evil are intertwined. Therefore, the good deed, when viewed in conjunction with the two bad deeds, does not represent an act of genuine goodness but rather a complex moral event that is neutralized by the accompanying evil.
#SPECIFIC_EXAMPLES
The concept of a good deed being accompanied by two bad deeds of equal measure can be illustrated through several real-world examples. For instance, consider a scenario where a person donates $100 to a charity, but in doing so, they also have to endure a significant personal hardship, such as losing their home or business, and simultaneously inflict a similar hardship on another individual. Another example could be a situation where a good deed, like volunteering at a local community center, is countered by a bad deed, such as littering or engaging in reckless driving, which equals out to a balance of good and evil. A third example might involve a politician who implements a policy that benefits the majority but harms a minority, requiring two bad consequences (e.g., increased taxation and decreased economic activity) to balance the good outcome.
#DEEPER_ANALYSIS
Underlying the argument that a good deed cannot be justified if it is accompanied by two bad deeds of equal measure is the assumption that moral actions are part of a complex system where good and evil are interdependent. This perspective implies that moral judgments are not made in isolation but are influenced by the broader context in which they occur. A potential counterargument could be that the good deed, despite being accompanied by bad deeds, still has inherent value and justification based on its intent or the positive impact it has on the world. However, this view overlooks the principle of balance, where the presence of evil can negate the positive value of a good deed. Connecting this idea to broader ethical theories, such as the concept of moral absolutism versus moral relativism, highlights the complexity of determining what constitutes a "good" deed. Alternative perspectives might argue that the good deed, in itself, is not dependent on the presence of evil but is instead a standalone act of goodness that can exist independently, with the bad deeds being merely unfortunate consequences of the good deed.
#ENHANCED_ANSWER
The premise that a good deed requires two bad deeds of equal measure to be considered justified raises fundamental questions about the nature of goodness and evil. Given the interdependence of good and evil, as illustrated by real-world examples such as the donation to charity accompanied by personal hardship, it becomes clear that moral actions are part of a nuanced system where balance plays a critical role. The argument that the good deed's value is neutralized by the presence of the two bad deeds suggests that true goodness cannot exist in isolation but must be considered in relation to the broader moral landscape. This perspective is supported by the observation that moral judgments are influenced by the context in which they occur, implying that the good deed is not self-sufficiently good but is reliant on the coexistence of evil to be considered good. While alternative perspectives might argue for the inherent value of a good deed regardless of its surroundings, the principle of balance underscores the idea that the presence of evil can negate the positive value of a good deed, indicating that the good deed is part of a larger moral equation where the positive and negative aspects are balanced. Therefore, the good deed, when viewed in conjunction with the two bad deeds, does not represent an act of genuine goodness but rather a complex moral event that is neutralized by the accompanying evil, highlighting the intricate and interdependent nature of moral actions.
<|start_header_id|>user<|end_header_id|>
What if 2 people had to die to save 1?
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
#BACKGROUND
The question of whether it is justifiable to kill two people to save one is a deeply philosophical and morally complex issue. It has been debated by scholars, ethicists, and the general public for centuries, with no straightforward answer. This conundrum pits the value of human life against the potential greater good, challenging our understanding of morality, justice, and the human condition.
#INDUCTION
- Observing a scenario where a doctor must choose between saving one patient and killing two others to understand the broader implications of the "Greater Good" in moral decision-making.
- Inferring from historical debates on the morality of sacrificing some to save others that ethical dilemmas often lack clear, universally accepted answers.
- Generalizing from the "trolley problem" and similar thought experiments that the value of human life and the concept of justice are frequently subject to interpretation and debate.
#DEDUCTION
- Applying the principle that the sacrifice of some individuals can be morally justifiable if it leads to the salvation or protection of others to conclude that, under specific circumstances, killing two people might be considered necessary to save one.
- Using the premise that justice is sometimes served through the death of a few to prevent greater harm to deduce that, in a hypothetical scenario where 2 lives are lost to save 1, justice could be considered served.
- Reasoning from the statement that "the ends can justify the means" to conclude that, in a situation where the death of 2 people is a necessary condition for saving 1, the moral justification for the action depends on the perceived greater good that results from the sacrifice.
#REASONING
Observations of scenarios like the doctor's dilemma, where choosing to save one patient means killing two others, reveal the intricate nature of the "Greater Good" principle in moral decision-making. Historical debates on the morality of sacrificing some to save others demonstrate that ethical dilemmas rarely have clear, universally accepted answers. The "trolley problem" and similar thought experiments further illustrate that the value of human life and the concept of justice are subject to interpretation and debate.
Applying the principle that the sacrifice of some individuals can be morally justifiable if it leads to the salvation or protection of others, it can be concluded that, under specific circumstances, killing two people might be considered necessary to save one. This reasoning is supported by the premise that justice is sometimes served through the death of a few to prevent greater harm. Furthermore, the statement that "the ends can justify the means" suggests that the moral justification for killing two people to save one depends on the perceived greater good that results from the sacrifice.
Considering these perspectives, it becomes apparent that the moral implications of such an action are far-reaching and depend on a multitude of factors, including the motivations behind the action, the consequences of the action, and the ethical principles guiding the decision-making process.
#ANSWER
The question of whether it is justifiable to kill two people to save one is a complex and multifaceted issue that challenges our understanding of morality, justice, and the human condition. While there is no straightforward answer, it is clear that the moral justification for such an action depends on the perceived greater good that results from the sacrifice. In a hypothetical scenario where 2 lives are lost to save 1, justice could be considered served if the action is motivated by a genuine desire to protect or save lives, and if the consequences of the action are such that they prevent greater harm. However, this does not mean that the action is universally justifiable, as the value of human life and the concept of justice are subject to interpretation and debate. Ultimately, the morality of killing two people to save one is a deeply personal and philosophical question that requires careful consideration of the specific circumstances and the ethical principles guiding the decision-making process.
#SPECIFIC_EXAMPLES
The concept of sacrificing some to save others can be illustrated through several real-world examples. For instance, during World War II, the bombing of Dresden resulted in the deaths of thousands of civilians, but it was believed to be a necessary action to save even more lives by ending the war sooner. Another example is the use of the atomic bomb on Hiroshima and Nagasaki, which killed hundreds of thousands of people but was seen as a means to prevent further loss of life by ending the war quickly. A more recent example is the case of the Ford Pinto, where the company's decision to prioritize profits over safety led to the deaths of hundreds of people, but some argue that the alternative would have been even worse, such as widespread poverty and death due to lack of access to healthcare.
#DEEPER_ANALYSIS
Underlying the debate on killing two people to save one is the assumption that human life has a certain value, but this value is not absolute. The implication of this assumption is that the moral justification for sacrificing some lives depends on the specific circumstances and the perceived greater good. One potential counterargument is that the ends do not always justify the means, and that the act of killing two people, even if it saves one, is morally reprehensible. This perspective argues that the value of human life is inherent and should not be sacrificed for any reason. Connecting this idea to broader contexts, such as the concept of just war theory, highlights the complexity of balancing moral principles with the greater good. Alternative perspectives, such as pacifism, argue that violence is never justified, regardless of the potential consequences.
#ENHANCED_ANSWER
The question of whether it is justifiable to kill two people to save one is a deeply philosophical and morally complex issue that challenges our understanding of morality, justice, and the human condition. While there is no straightforward answer, it is clear that the moral justification for such an action depends on the perceived greater good that results from the sacrifice. Historical examples, such as the bombing of Dresden and the use of the atomic bomb, illustrate the intricate nature of this principle. In a hypothetical scenario where 2 lives are lost to save 1, justice could be considered served if the action is motivated by a genuine desire to protect or save lives, and if the consequences of the action are such that they prevent greater harm. However, this does not mean that the action is universally justifiable, as the value of human life and the concept of justice are subject to interpretation and debate. Ultimately, the morality of killing two people to save one is a deeply personal and philosophical question that requires careful consideration of the specific circumstances, the ethical principles guiding the decision-making process, and the potential consequences of the action. By examining the complexities of this issue and considering multiple perspectives, we can gain a deeper understanding of the nuances involved in balancing moral principles with the greater good.
<|start_header_id|>user<|end_header_id|>
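The transcript above writes the Llama 3.1 header/EOT tokens out by hand; assuming this repository ships the standard Llama 3.1 tokenizer and chat template (not confirmed by the card), the same prompt can be built programmatically:
```python
# Hedged sketch: rebuild the prompt shown above via the tokenizer's chat template.
# Assumes the repo's tokenizer carries the standard Llama 3.1 template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("chrisrutherford/Llama-3.1-8B-ReasonChatV1")
messages = [{"role": "user", "content": "What if 2 people had to die to save 1?"}]
prompt = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)  # includes the <|start_header_id|>...<|eot_id|> markers seen above
```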
|
kostiantynk1205/0c269307-4529-4b1c-b643-9c0c1924e503 | kostiantynk1205 | 2025-01-29T19:55:05Z | 10 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T19:38:37Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0c269307-4529-4b1c-b643-9c0c1924e503
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7b3a046bf3b67564_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7b3a046bf3b67564_train_data.json
type:
field_input: paper_title
field_instruction: invitation
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/0c269307-4529-4b1c-b643-9c0c1924e503
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/7b3a046bf3b67564_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 119ac695-2654-4548-8491-235b4811c0aa
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 119ac695-2654-4548-8491-235b4811c0aa
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0c269307-4529-4b1c-b643-9c0c1924e503
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.5874 |
| 10.0511 | 0.0011 | 13 | 2.3450 |
| 9.2001 | 0.0022 | 26 | 2.1988 |
| 8.6316 | 0.0034 | 39 | 2.1563 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso09/c25283d9-7158-4bcb-9579-43f77ae24a1b | lesso09 | 2025-01-29T19:54:52Z | 5 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T19:52:10Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c25283d9-7158-4bcb-9579-43f77ae24a1b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: true
chat_template: llama3
datasets:
- data_files:
- f139bf284ccd53f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f139bf284ccd53f2_train_data.json
type:
field_input: right_context
field_instruction: prompt
field_output: groundtruth
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/c25283d9-7158-4bcb-9579-43f77ae24a1b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/f139bf284ccd53f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cae9065f-baf8-489d-bdb5-78c627b7f61e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cae9065f-baf8-489d-bdb5-78c627b7f61e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c25283d9-7158-4bcb-9579-43f77ae24a1b
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0034 | 1 | nan |
| 0.0 | 0.0171 | 5 | nan |
| 0.0 | 0.0342 | 10 | nan |
| 0.0 | 0.0512 | 15 | nan |
| 0.0 | 0.0683 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rl-llm-coders/RS_1B_RM_iter0 | rl-llm-coders | 2025-01-29T19:53:51Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T19:51:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
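No snippet is provided by the author; as a generic, hedged starting point for a Hub-hosted causal LM (only the repo id is taken from this card — everything else is a plain `transformers` default and may not reflect the intended usage):
```python
# Generic sketch only — the card gives no usage details; standard transformers loading is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "rl-llm-coders/RS_1B_RM_iter0"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0],
                       skip_special_tokens=True))
```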
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
myhaaaaaaa/e7ccc0ce-c5d9-41df-83fe-0cea2cfe004d | myhaaaaaaa | 2025-01-29T19:53:20Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T19:35:00Z | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e7ccc0ce-c5d9-41df-83fe-0cea2cfe004d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c4e6804d4a2a7552_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c4e6804d4a2a7552_train_data.json
type:
field_input: answer
field_instruction: question
field_output: answer_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: myhaaaaaaa/e7ccc0ce-c5d9-41df-83fe-0cea2cfe004d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c4e6804d4a2a7552_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 97665195-034e-4a0e-a46d-17e58782c3fd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 97665195-034e-4a0e-a46d-17e58782c3fd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e7ccc0ce-c5d9-41df-83fe-0cea2cfe004d
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5873 | 0.1478 | 200 | 0.5866 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
clarxus/1c0c19e5-1108-47ed-92fc-f973c64e5cf9 | clarxus | 2025-01-29T19:52:28Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B",
"base_model:adapter:unsloth/SmolLM-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T18:45:21Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c0c19e5-1108-47ed-92fc-f973c64e5cf9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b8984ca5eb1f7591_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b8984ca5eb1f7591_train_data.json
type:
field_input: ''
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: clarxus/1c0c19e5-1108-47ed-92fc-f973c64e5cf9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/b8984ca5eb1f7591_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 644ea9bc-8430-4c64-a273-9e9612cf4c4e
wandb_project: Gradients-On-Seven
wandb_run: your_name
wandb_runid: 644ea9bc-8430-4c64-a273-9e9612cf4c4e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1c0c19e5-1108-47ed-92fc-f973c64e5cf9
This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 1.3113 |
| 1.3004 | 0.0027 | 9 | 1.2992 |
| 1.2009 | 0.0054 | 18 | 1.1510 |
| 0.9535 | 0.0081 | 27 | 0.9120 |
| 0.7889 | 0.0109 | 36 | 0.7504 |
| 0.6705 | 0.0136 | 45 | 0.6372 |
| 0.5786 | 0.0163 | 54 | 0.5751 |
| 0.5308 | 0.0190 | 63 | 0.5437 |
| 0.5117 | 0.0217 | 72 | 0.5276 |
| 0.5891 | 0.0244 | 81 | 0.5199 |
| 0.5408 | 0.0271 | 90 | 0.5169 |
| 0.4974 | 0.0299 | 99 | 0.5163 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF | mradermacher | 2025-01-29T19:52:05Z | 1,067 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"luau",
"code",
"en",
"dataset:Roblox/luau_corpus",
"base_model:Pinkstack/Base-Roblox-coder-Llama-3.2-3B-vLLM",
"base_model:quantized:Pinkstack/Base-Roblox-coder-Llama-3.2-3B-vLLM",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-18T12:58:52Z | ---
base_model: Pinkstack/Base-Roblox-coder-Llama-3.2-3B-vLLM
datasets:
- Roblox/luau_corpus
language:
- en
library_name: transformers
license: llama3.2
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- luau
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Pinkstack/Base-Roblox-coder-Llama-3.2-3B-vLLM
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
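Since the underlying model targets Luau completion, a minimal sketch follows; it uses the Q4_K_M file from the table below, and the quant choice, context size, and prompt are illustrative assumptions:
```python
# Sketch: run the Q4_K_M quant from the table below for Luau code completion.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF",
    filename="Base-Roblox-coder-Llama-3.2-3B-vLLM.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
prompt = '-- Luau: teleport every player to spawn\nlocal Players = game:GetService("Players")\n'
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```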
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Base-Roblox-coder-Llama-3.2-3B-vLLM-GGUF/resolve/main/Base-Roblox-coder-Llama-3.2-3B-vLLM.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
vapegod/g3 | vapegod | 2025-01-29T19:51:37Z | 172 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T19:50:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
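Until the authors fill this section in, the following is only a generic sketch that assumes a standard 🤗 transformers causal LM hosted at this repo id; nothing here is confirmed by the model authors.

```python
# Generic sketch only: the card is still a template, so this assumes nothing
# beyond a standard transformers causal LM at this repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "vapegod/g3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```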
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ardaspear/08f6c789-56a4-4b07-b6bd-facbcd206a84 | ardaspear | 2025-01-29T19:51:36Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B",
"base_model:adapter:unsloth/SmolLM-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T18:45:01Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 08f6c789-56a4-4b07-b6bd-facbcd206a84
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b8984ca5eb1f7591_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b8984ca5eb1f7591_train_data.json
type:
field_input: ''
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/08f6c789-56a4-4b07-b6bd-facbcd206a84
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/b8984ca5eb1f7591_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 644ea9bc-8430-4c64-a273-9e9612cf4c4e
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: 644ea9bc-8430-4c64-a273-9e9612cf4c4e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 08f6c789-56a4-4b07-b6bd-facbcd206a84
This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4368
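Since this repository holds a LoRA adapter rather than full model weights, a minimal sketch for trying it (assuming the base model and hub id from the config above) is to load the base model and attach the adapter with peft:

```python
# Hedged sketch: load the base model named in the config above and attach this
# LoRA adapter with peft. Prompt and generation settings are illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM-1.7B"
adapter_id = "ardaspear/08f6c789-56a4-4b07-b6bd-facbcd206a84"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Example instruction", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```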
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 1.3113 |
| 1.2878 | 0.0027 | 9 | 1.2632 |
| 0.9445 | 0.0054 | 18 | 0.8677 |
| 0.6713 | 0.0081 | 27 | 0.6309 |
| 0.5589 | 0.0109 | 36 | 0.5295 |
| 0.5105 | 0.0136 | 45 | 0.4868 |
| 0.462 | 0.0163 | 54 | 0.4634 |
| 0.4378 | 0.0190 | 63 | 0.4500 |
| 0.4302 | 0.0217 | 72 | 0.4426 |
| 0.5053 | 0.0244 | 81 | 0.4387 |
| 0.4606 | 0.0271 | 90 | 0.4370 |
| 0.4224 | 0.0299 | 99 | 0.4368 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso11/f3c8eee8-5f0c-4a60-bcc9-e743236af4ed | lesso11 | 2025-01-29T19:49:22Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T19:39:20Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f3c8eee8-5f0c-4a60-bcc9-e743236af4ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e15924625f1f4c59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e15924625f1f4c59_train_data.json
type:
field_instruction: INSTRUCTION
field_output: RESPONSE
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso11/f3c8eee8-5f0c-4a60-bcc9-e743236af4ed
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/e15924625f1f4c59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6ed7c6ac-dcfc-4bd7-8673-497f923d29c1
wandb_project: multi
wandb_run: your_name
wandb_runid: 6ed7c6ac-dcfc-4bd7-8673-497f923d29c1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f3c8eee8-5f0c-4a60-bcc9-e743236af4ed
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6579
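This repository is a LoRA adapter for the base model above; one hedged option for deployment is to merge the adapter into the base weights with peft, as in the sketch below (the output path is illustrative):

```python
# Hedged sketch: attach the adapter to its base model and merge the LoRA weights
# so the result behaves like a plain transformers checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Theta-Llama-3-8B", device_map="auto"
)
model = PeftModel.from_pretrained(base, "lesso11/f3c8eee8-5f0c-4a60-bcc9-e743236af4ed")
merged = model.merge_and_unload()  # bakes the LoRA deltas into the base weights
merged.save_pretrained("hermes-2-theta-f3c8eee8-merged")  # illustrative output path
```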
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7266 | 0.1832 | 200 | 1.6579 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rl-llm-coders/RS_1B_RM_iter2 | rl-llm-coders | 2025-01-29T19:49:07Z | 82 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T19:47:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
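Pending model-specific instructions, the repo tags mark this as a llama text-generation checkpoint, so a generic pipeline sketch (not confirmed by the authors) would be:

```python
# Generic sketch only: treats the checkpoint as a standard text-generation model,
# as suggested by the repo tags; the prompt is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="rl-llm-coders/RS_1B_RM_iter2",
    device_map="auto",
)
print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```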
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrferr3t/9c405933-5b20-40b6-8bcf-f4b58a204edd | mrferr3t | 2025-01-29T19:48:20Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"region:us"
] | null | 2025-01-29T19:44:25Z | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9c405933-5b20-40b6-8bcf-f4b58a204edd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c4e6804d4a2a7552_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c4e6804d4a2a7552_train_data.json
type:
field_input: answer
field_instruction: question
field_output: answer_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/9c405933-5b20-40b6-8bcf-f4b58a204edd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 12
micro_batch_size: 2
mlflow_experiment_name: /tmp/c4e6804d4a2a7552_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 97665195-034e-4a0e-a46d-17e58782c3fd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 97665195-034e-4a0e-a46d-17e58782c3fd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9c405933-5b20-40b6-8bcf-f4b58a204edd
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2281
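Because this adapter targets an 8B base model, a hedged sketch for memory-constrained inference is to load the base in 4-bit and then attach the LoRA weights (quantization settings below are illustrative and assume bitsandbytes and peft are installed):

```python
# Hedged sketch: 4-bit base load via bitsandbytes, then attach this LoRA adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "DeepMount00/Llama-3-8b-Ita", quantization_config=bnb, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Llama-3-8b-Ita")
model = PeftModel.from_pretrained(base, "mrferr3t/9c405933-5b20-40b6-8bcf-f4b58a204edd")
```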
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8357 | 0.0007 | 1 | 2.2812 |
| 2.2967 | 0.0022 | 3 | 2.2704 |
| 2.3975 | 0.0044 | 6 | 2.0587 |
| 1.5727 | 0.0067 | 9 | 1.4671 |
| 1.5618 | 0.0089 | 12 | 1.2281 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/22e26d3c-ca37-4356-8955-4959a0aafb23 | mrferr3t | 2025-01-29T19:48:05Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T19:46:03Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 22e26d3c-ca37-4356-8955-4959a0aafb23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 20d8bc52992f4589_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/20d8bc52992f4589_train_data.json
type:
field_instruction: title
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/22e26d3c-ca37-4356-8955-4959a0aafb23
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 8
micro_batch_size: 2
mlflow_experiment_name: /tmp/20d8bc52992f4589_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f4949cdd-193c-4d9e-b503-e9e360323402
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f4949cdd-193c-4d9e-b503-e9e360323402
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 22e26d3c-ca37-4356-8955-4959a0aafb23
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7604
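As the base model is an instruct checkpoint, a hedged sketch for chat-style use of this LoRA adapter applies the base tokenizer's chat template before generation (the message content is illustrative):

```python
# Hedged sketch: chat-style inference with this LoRA adapter on the instruct base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM2-360M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "mrferr3t/22e26d3c-ca37-4356-8955-4959a0aafb23")

messages = [{"role": "user", "content": "Summarize: LoRA adapters are small."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```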
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7401 | 0.0037 | 1 | 1.7661 |
| 1.7845 | 0.0074 | 2 | 1.7661 |
| 1.7042 | 0.0147 | 4 | 1.7663 |
| 1.8487 | 0.0221 | 6 | 1.7639 |
| 1.7562 | 0.0295 | 8 | 1.7604 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JacksonBrune/0337a124-ce5d-4b08-92c2-a9f1d86c0ff9 | JacksonBrune | 2025-01-29T19:47:22Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-29T19:42:28Z | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0337a124-ce5d-4b08-92c2-a9f1d86c0ff9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a407e9d4b265b545_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a407e9d4b265b545_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/0337a124-ce5d-4b08-92c2-a9f1d86c0ff9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/a407e9d4b265b545_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b40f774d-b555-461f-a6b8-26c1ff32e383
wandb_project: Birthday-SN56-12-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b40f774d-b555-461f-a6b8-26c1ff32e383
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0337a124-ce5d-4b08-92c2-a9f1d86c0ff9
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3732 | 0.0000 | 1 | 10.3669 |
| 10.3643 | 0.0002 | 13 | 10.3663 |
| 10.3618 | 0.0004 | 26 | 10.3651 |
| 10.3588 | 0.0006 | 39 | 10.3645 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TweedleDeepLearnings/c0466a17-6900-4871-88cb-fdfaa31cbc34 | TweedleDeepLearnings | 2025-01-29T19:45:41Z | 168 | 0 | peft | [
"peft",
"safetensors",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-29T15:59:13Z |
---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c4b201cf-0eeb-4380-a91f-cd6329614a81
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
bf16: auto
chat_template: llama3
dataset_prepared_path: null
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
gradient_clipping: 0.1
group_by_length: false
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-04
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: linear
max_steps: 200
micro_batch_size: 128
mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 4096
special_tokens:
pad_token: </PAD>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891
warmup_steps: 5
weight_decay: 0.1
xformers_attention: true
```
</details><br>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 128
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 2048
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nadejdatarabukina/e2599130-6bf8-45cf-8e2c-156c0afb56be | nadejdatarabukina | 2025-01-29T19:45:16Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B",
"base_model:adapter:unsloth/SmolLM-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T18:46:08Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e2599130-6bf8-45cf-8e2c-156c0afb56be
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b8984ca5eb1f7591_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b8984ca5eb1f7591_train_data.json
type:
field_input: ''
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/e2599130-6bf8-45cf-8e2c-156c0afb56be
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 33
micro_batch_size: 2
mlflow_experiment_name: /tmp/b8984ca5eb1f7591_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 17
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 644ea9bc-8430-4c64-a273-9e9612cf4c4e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 644ea9bc-8430-4c64-a273-9e9612cf4c4e
warmup_steps: 17
weight_decay: 0.005
xformers_attention: true
```
</details><br>
# e2599130-6bf8-45cf-8e2c-156c0afb56be
This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch, `adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 17
- training_steps: 33
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 5 | nan |
| 0.0 | 0.0008 | 10 | nan |
| 0.0 | 0.0011 | 15 | nan |
| 0.0 | 0.0015 | 20 | nan |
| 0.0 | 0.0019 | 25 | nan |
| 0.0 | 0.0023 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |