modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
dimasik2987/eea7d6ca-5ae0-41ba-972c-bd24d4672380 | dimasik2987 | 2025-01-10T20:24:48Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-64k",
"region:us"
] | null | 2025-01-10T20:18:01Z | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eea7d6ca-5ae0-41ba-972c-bd24d4672380
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e55d99f129b023ec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e55d99f129b023ec_train_data.json
type:
field_instruction: english
field_output: chichewa
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik2987/eea7d6ca-5ae0-41ba-972c-bd24d4672380
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/e55d99f129b023ec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a764bcbd-6c20-4dad-b659-d2dc56817841
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a764bcbd-6c20-4dad-b659-d2dc56817841
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# eea7d6ca-5ae0-41ba-972c-bd24d4672380
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3024
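Since this repo ships a PEFT LoRA adapter rather than merged weights, a minimal sketch of loading it on top of the base model might look like the following (assuming a recent `peft`/`transformers`; the prompt is only an illustration of the English-to-Chichewa format used in the training config above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Yarn-Llama-2-7b-64k"
adapter_id = "dimasik2987/eea7d6ca-5ae0-41ba-972c-bd24d4672380"

# trust_remote_code mirrors the training config above
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Good morning, how are you today?"  # illustrative English input
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```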
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 4.0264 |
| 15.9992 | 0.0049 | 8 | 3.6810 |
| 13.1385 | 0.0098 | 16 | 3.4051 |
| 12.9244 | 0.0146 | 24 | 3.3024 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
error577/7efa787b-62d8-4707-86c6-c7fe1c790941 | error577 | 2025-01-10T20:24:47Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T17:49:53Z | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7efa787b-62d8-4707-86c6-c7fe1c790941
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: qlora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 227afdb836d95568_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/227afdb836d95568_train_data.json
type:
field_input: level
field_instruction: name
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: error577/7efa787b-62d8-4707-86c6-c7fe1c790941
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.01
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 3
micro_batch_size: 1
mlflow_experiment_name: /tmp/227afdb836d95568_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch_4bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 128
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7d046c5f-2837-45a6-a88d-f3d18615b72c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7d046c5f-2837-45a6-a88d-f3d18615b72c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7efa787b-62d8-4707-86c6-c7fe1c790941
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_TORCH_4BIT with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8488 | 0.0000 | 1 | 2.0958 |
| 2.2878 | 0.0001 | 2 | 1.8284 |
| 1.9633 | 0.0001 | 3 | 3.4793 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tmpmodelsave/type134_step200 | tmpmodelsave | 2025-01-10T20:21:32Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T20:15:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lhong4759/29c9a7fb-e34c-4e02-bd97-bd07548e5853 | lhong4759 | 2025-01-10T20:21:32Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T19:47:43Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 29c9a7fb-e34c-4e02-bd97-bd07548e5853
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e53c72ba80674f72_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e53c72ba80674f72_train_data.json
type:
field_instruction: ptype
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/29c9a7fb-e34c-4e02-bd97-bd07548e5853
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e53c72ba80674f72_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 947b25e3-b276-4a08-8875-e5b98a03e2b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 947b25e3-b276-4a08-8875-e5b98a03e2b8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 29c9a7fb-e34c-4e02-bd97-bd07548e5853
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4975 | 0.7428 | 200 | 2.4749 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JacksonBrune/338aec7d-1a10-480b-a0e4-bc05240ed560 | JacksonBrune | 2025-01-10T20:19:04Z | 20 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T20:18:27Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 338aec7d-1a10-480b-a0e4-bc05240ed560
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b89ff688452d146d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b89ff688452d146d_train_data.json
type:
field_input: category
field_instruction: original
field_output: revised
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/338aec7d-1a10-480b-a0e4-bc05240ed560
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b89ff688452d146d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ac4e0bf4-846c-4cb0-8b2f-bf6d8518e0c1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ac4e0bf4-846c-4cb0-8b2f-bf6d8518e0c1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 338aec7d-1a10-480b-a0e4-bc05240ed560
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.956 | 0.0024 | 1 | 2.3350 |
| 2.1726 | 0.0072 | 3 | 2.3346 |
| 2.4523 | 0.0144 | 6 | 2.3254 |
| 2.5749 | 0.0216 | 9 | 2.2987 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Calme-3.2-fp16-GGUF | mradermacher | 2025-01-10T20:18:28Z | 108 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Sakalti/Calme-3.2-fp16",
"base_model:quantized:Sakalti/Calme-3.2-fp16",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-10T15:22:25Z | ---
base_model: Sakalti/Calme-3.2-fp16
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license_name: qwen
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/Sakalti/Calme-3.2-fp16
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Calme-3.2-fp16-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
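For the split files above, a minimal local sketch of joining the parts (an assumption here: the `.partXofY` files are plain byte-wise splits, so concatenating them in order reproduces the original GGUF):
```python
from pathlib import Path

# Example for the Q4_K_M parts listed in the table below; adjust names/paths as needed.
parts = [
    Path("Calme-3.2-fp16.Q4_K_M.gguf.part1of2"),
    Path("Calme-3.2-fp16.Q4_K_M.gguf.part2of2"),
]
out = Path("Calme-3.2-fp16.Q4_K_M.gguf")

with out.open("wb") as dst:
    for part in parts:  # order matters: part1 first, then part2
        with part.open("rb") as src:
            while chunk := src.read(1 << 20):  # copy in 1 MiB chunks to bound memory use
                dst.write(chunk)

print(f"Wrote {out} ({out.stat().st_size} bytes)")
```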
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q2_K.gguf) | Q2_K | 31.9 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q3_K_S.gguf) | Q3_K_S | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q3_K_M.gguf) | Q3_K_M | 40.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q3_K_L.gguf) | Q3_K_L | 42.4 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.IQ4_XS.gguf) | IQ4_XS | 43.1 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q4_K_S.gguf) | Q4_K_S | 47.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q4_K_M.gguf.part2of2) | Q4_K_M | 50.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q5_K_S.gguf.part2of2) | Q5_K_S | 55.2 | |
| [PART 1](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q5_K_M.gguf.part2of2) | Q5_K_M | 58.4 | |
| [PART 1](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q6_K.gguf.part2of2) | Q6_K | 69.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Calme-3.2-fp16-GGUF/resolve/main/Calme-3.2-fp16.Q8_0.gguf.part2of2) | Q8_0 | 83.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ehristoforu/qwen2.5-with-lora-think-3b-it | ehristoforu | 2025-01-10T20:15:02Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T20:06:01Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B
tags:
- chat
library_name: transformers
---
# Qwen2.5-3B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 3B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
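A quick way to confirm the installed version meets this requirement before loading the model (a minimal check; `packaging` is a declared dependency of `transformers`):
```python
import transformers
from packaging import version

# Per the note above, transformers < 4.37.0 raises KeyError: 'qwen2'.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old for Qwen2.5; run `pip install -U transformers`"
)
```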
## Quickstart
The snippet below uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
sergioalves/3902a50d-e050-4f35-bc1d-df2f97be6c34 | sergioalves | 2025-01-10T20:10:26Z | 18 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T20:08:26Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3902a50d-e050-4f35-bc1d-df2f97be6c34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c89dd29ef50531ae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c89dd29ef50531ae_train_data.json
type:
field_instruction: docname
field_output: conclusion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: sergioalves/3902a50d-e050-4f35-bc1d-df2f97be6c34
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/c89dd29ef50531ae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b6a835a7-4ca6-4649-9d41-e742004a1dc6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b6a835a7-4ca6-4649-9d41-e742004a1dc6
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3902a50d-e050-4f35-bc1d-df2f97be6c34
This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | nan |
| 0.0 | 0.0060 | 8 | nan |
| 0.0 | 0.0119 | 16 | nan |
| 0.0 | 0.0179 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Echo9Zulu/granite-3.1-2b-instruct-int8_asym-ov | Echo9Zulu | 2025-01-10T20:10:07Z | 7 | 0 | null | [
"openvino",
"granite",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T20:08:36Z | ---
license: apache-2.0
---
|
great0001/9ee8445d-1357-49ca-bf94-5f10eb237524 | great0001 | 2025-01-10T20:09:54Z | 18 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T20:08:34Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9ee8445d-1357-49ca-bf94-5f10eb237524
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c89dd29ef50531ae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c89dd29ef50531ae_train_data.json
type:
field_instruction: docname
field_output: conclusion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/9ee8445d-1357-49ca-bf94-5f10eb237524
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c89dd29ef50531ae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b6a835a7-4ca6-4649-9d41-e742004a1dc6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b6a835a7-4ca6-4649-9d41-e742004a1dc6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9ee8445d-1357-49ca-bf94-5f10eb237524
This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0007 | 1 | nan |
| 0.0 | 0.0022 | 3 | nan |
| 0.0 | 0.0045 | 6 | nan |
| 0.0 | 0.0067 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VERSIL91/947b25e3-b276-4a08-8875-e5b98a03e2b8 | VERSIL91 | 2025-01-10T20:06:39Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
] | null | 2025-01-10T19:52:22Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 947b25e3-b276-4a08-8875-e5b98a03e2b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e53c72ba80674f72_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e53c72ba80674f72_train_data.json
type:
field_instruction: ptype
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/947b25e3-b276-4a08-8875-e5b98a03e2b8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/e53c72ba80674f72_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 947b25e3-b276-4a08-8875-e5b98a03e2b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 947b25e3-b276-4a08-8875-e5b98a03e2b8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 947b25e3-b276-4a08-8875-e5b98a03e2b8
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.85 | 0.0149 | 1 | 2.7583 |
| 2.7071 | 0.0743 | 5 | 2.7210 |
| 2.6887 | 0.1486 | 10 | 2.6699 |
| 2.8234 | 0.2228 | 15 | 2.6043 |
| 2.7161 | 0.2971 | 20 | 2.5958 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Slicky325/rm-trial | Slicky325 | 2025-01-10T20:06:39Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-10T07:31:54Z | ---
library_name: transformers
license: mit
base_model: openai-community/gpt2-medium
tags:
- generated_from_trainer
model-index:
- name: rm-trial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rm-trial
This model is a fine-tuned version of [openai-community/gpt2-medium](https://huggingface.co/openai-community/gpt2-medium) on an unknown dataset.
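The `text-classification` pipeline tag and GPT-2 backbone suggest a sequence-classification (reward-model-style) head. A minimal, hedged sketch for scoring a text, assuming a single-logit head (this is not documented in the card):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Slicky325/rm-trial"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Prompt: Explain overfitting.\nResponse: Overfitting is when a model memorizes noise."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels); often (1, 1) for reward models
print(logits.squeeze().tolist())
```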
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
quannh197/f5fb298c-df1a-4966-9c77-a5b493faeff6 | quannh197 | 2025-01-10T20:06:14Z | 13 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"region:us"
] | null | 2025-01-10T20:04:25Z | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5fb298c-df1a-4966-9c77-a5b493faeff6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8a35a7c9651c7d97_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8a35a7c9651c7d97_train_data.json
type:
field_input: negative
field_instruction: anchor
field_output: positive
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: quannh197/f5fb298c-df1a-4966-9c77-a5b493faeff6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8a35a7c9651c7d97_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f5fb298c-df1a-4966-9c77-a5b493faeff6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f5fb298c-df1a-4966-9c77-a5b493faeff6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f5fb298c-df1a-4966-9c77-a5b493faeff6
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0864 | 0.0187 | 1 | 2.4635 |
| 2.0881 | 0.0561 | 3 | 2.4597 |
| 2.1077 | 0.1121 | 6 | 2.4028 |
| 1.9801 | 0.1682 | 9 | 2.1271 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
CrimsonZockt/LexyChaplin-FLUXLORA | CrimsonZockt | 2025-01-10T20:05:53Z | 37 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-10T20:05:34Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Lexy Chaplin, black tanktop, professional headshot, photoshoot.
output:
url: images/Lexy Chaplin, black tanktop, professional heads....png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Lexy Chaplin
---
# LexyChaplin
<Gallery />
## Model description
This is a LoRA model that I trained on Weights.gg.
## Trigger words
You should use `Lexy Chaplin` to trigger the image generation.
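A minimal sketch of using the LoRA with `diffusers` (assumptions: a recent `diffusers` release with Flux support, access to the gated base model, and a GPU with enough memory; the prompt mirrors the widget example above):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("CrimsonZockt/LexyChaplin-FLUXLORA")  # attach this repo's LoRA

image = pipe(
    "Lexy Chaplin, black tanktop, professional headshot, photoshoot.",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("lexy-chaplin.png")
```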
## Download model
Weights for this model are available in Safetensors format.
[Download](/CrimsonZockt/LexyChaplin-FLUXLORA/tree/main) them in the Files & versions tab.
|
tuanna08go/8babe579-9337-b4c2-3206-7f11b058a95f | tuanna08go | 2025-01-10T20:04:40Z | 18 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T19:53:50Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8babe579-9337-b4c2-3206-7f11b058a95f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f92233bb04f07837_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f92233bb04f07837_train_data.json
type:
field_instruction: instruction
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/8babe579-9337-b4c2-3206-7f11b058a95f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f92233bb04f07837_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8f8e1714-7272-4451-aa8c-2fc4e30cb7a0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8f8e1714-7272-4451-aa8c-2fc4e30cb7a0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8babe579-9337-b4c2-3206-7f11b058a95f
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0039 | 1 | nan |
| 0.0 | 0.0390 | 10 | nan |
| 0.0 | 0.0780 | 20 | nan |
| 0.0 | 0.1170 | 30 | nan |
| 0.0 | 0.1559 | 40 | nan |
| 0.0161 | 0.1949 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
havinash-ai/7394d15d-eefa-4402-9516-ba84cd3e7dad | havinash-ai | 2025-01-10T20:04:23Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | 2025-01-10T20:03:12Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7394d15d-eefa-4402-9516-ba84cd3e7dad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 39217ad3029e2718_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/39217ad3029e2718_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/7394d15d-eefa-4402-9516-ba84cd3e7dad
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/39217ad3029e2718_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 67fca483-4118-4d2f-936b-5b5d958ee0d0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 67fca483-4118-4d2f-936b-5b5d958ee0d0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7394d15d-eefa-4402-9516-ba84cd3e7dad
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7645
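This card does not include inference code. As a minimal, hedged sketch (the repository ids come from this card; everything else is generic `peft`/`transformers` boilerplate and the prompt is purely illustrative), the adapter can be loaded on top of the base model like this:
```python
# Minimal sketch: attach this LoRA adapter to the Qwen1.5-7B base model and generate.
# Assumes `transformers`, `peft` and a GPU with enough memory are available.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-7B"
adapter_id = "havinash-ai/7394d15d-eefa-4402-9516-ba84cd3e7dad"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # loads the LoRA weights on top of the base

inputs = tokenizer("Write a short sentence about the sea.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```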
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9273 | 0.0021 | 1 | 2.8908 |
| 2.7151 | 0.0062 | 3 | 2.8867 |
| 2.9105 | 0.0123 | 6 | 2.8485 |
| 2.665 | 0.0185 | 9 | 2.7645 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
neuroplus/secret-sauce-b3 | neuroplus | 2025-01-10T20:04:13Z | 87 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers-Distilled",
"base_model:adapter:Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers-Distilled",
"region:us"
] | text-to-image | 2025-01-10T19:09:51Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Screenshot
output:
url: images/Screen Shot 2025-01-10 at 4.09.23 PM.png
base_model: Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers-Distilled
instance_prompt: secret-sauce
---
# secret-sauce-b3
<Gallery />
## Trigger words
You should use `secret-sauce` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/neuroplus/secret-sauce-b3/tree/main) them in the Files & versions tab.
|
thangla01/5ffa1374-0e30-493d-b060-a2d2b07f66b1 | thangla01 | 2025-01-10T20:03:53Z | 5 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T18:45:49Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5ffa1374-0e30-493d-b060-a2d2b07f66b1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6b9109a6db174c05_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b9109a6db174c05_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/5ffa1374-0e30-493d-b060-a2d2b07f66b1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6b9109a6db174c05_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3aefa5d0-1c2e-4a87-ac85-eadf9d27a6ee
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3aefa5d0-1c2e-4a87-ac85-eadf9d27a6ee
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5ffa1374-0e30-493d-b060-a2d2b07f66b1
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4334
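No usage example is provided in the card. As a rough sketch (assuming `bitsandbytes`, `peft` and `transformers` are installed; the repository ids come from this card), the adapter can be attached to an 8-bit quantized base model, mirroring the `load_in_8bit: true` setting used for training:
```python
# Minimal sketch: load the base model in 8-bit (as during training) and attach this adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/Qwen2.5-Math-7B-Instruct"
adapter_id = "thangla01/5ffa1374-0e30-493d-b060-a2d2b07f66b1"

bnb = BitsAndBytesConfig(load_in_8bit=True)  # same 8-bit setting as the training config
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```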
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.1947 | 0.0047 | 200 | 5.4334 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/decea65c-2daa-4bdd-b988-57df9b8f720e | kk-aivio | 2025-01-10T20:03:48Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
] | null | 2025-01-10T20:02:35Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: decea65c-2daa-4bdd-b988-57df9b8f720e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e53c72ba80674f72_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e53c72ba80674f72_train_data.json
type:
field_instruction: ptype
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/decea65c-2daa-4bdd-b988-57df9b8f720e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e53c72ba80674f72_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 947b25e3-b276-4a08-8875-e5b98a03e2b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 947b25e3-b276-4a08-8875-e5b98a03e2b8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# decea65c-2daa-4bdd-b988-57df9b8f720e
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6656
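The training config above uses the llama3 chat template. As an illustrative, hedged sketch (assuming the base model's tokenizer ships a chat template; repository ids come from this card), prompts can be formatted with `apply_chat_template` before generation:
```python
# Minimal sketch: format a chat prompt and generate with the adapter attached to the base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Orenguteng/Llama-3-8B-Lexi-Uncensored"
adapter_id = "kk-aivio/decea65c-2daa-4bdd-b988-57df9b8f720e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Write one sentence about llamas."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```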
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7717 | 0.0037 | 1 | 2.7590 |
| 2.8945 | 0.0111 | 3 | 2.7541 |
| 2.7749 | 0.0223 | 6 | 2.6835 |
| 2.7356 | 0.0334 | 9 | 2.6656 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
imrobbiegreen/robbie-lora | imrobbiegreen | 2025-01-10T20:02:00Z | 26 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T19:18:50Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ROBBIEG
---
# Robbielora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ROBBIEG` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('imrobbiegreen/robbie-lora', weight_name='lora.safetensors')  # repo id matches this model's path
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
adammandic87/a4b602b9-442e-471b-a4f7-6692a152d401 | adammandic87 | 2025-01-10T19:58:24Z | 11 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | 2025-01-10T19:56:55Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4b602b9-442e-471b-a4f7-6692a152d401
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6fd144c2f0e2602e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6fd144c2f0e2602e_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/a4b602b9-442e-471b-a4f7-6692a152d401
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/6fd144c2f0e2602e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6b22e34a-3e02-4be9-8d62-8b067bce88c7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6b22e34a-3e02-4be9-8d62-8b067bce88c7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4b602b9-442e-471b-a4f7-6692a152d401
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 3 | nan |
| 0.0 | 0.0006 | 6 | nan |
| 0.0 | 0.0009 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
havinash-ai/22233a1c-66c0-4d21-a01e-9851964c720e | havinash-ai | 2025-01-10T19:56:15Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-01-10T19:52:30Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 22233a1c-66c0-4d21-a01e-9851964c720e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fe2766c24526d5b6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fe2766c24526d5b6_train_data.json
type:
field_instruction: instruction
field_output: output_1
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/22233a1c-66c0-4d21-a01e-9851964c720e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/fe2766c24526d5b6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a946a5a-c33d-4e49-b8f6-d790fd5d9545
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a946a5a-c33d-4e49-b8f6-d790fd5d9545
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 22233a1c-66c0-4d21-a01e-9851964c720e
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0009 | 0.0005 | 1 | 1.1621 |
| 1.1693 | 0.0015 | 3 | 1.1604 |
| 1.0487 | 0.0031 | 6 | 1.1477 |
| 1.08 | 0.0046 | 9 | 1.1058 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/514146dd-9f4e-4e8d-aa57-ffbe268a7908 | Best000 | 2025-01-10T19:55:58Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T19:55:16Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 514146dd-9f4e-4e8d-aa57-ffbe268a7908
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ef271f6ed281f7dc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ef271f6ed281f7dc_train_data.json
type:
field_input: spec_relation
field_instruction: premise
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/514146dd-9f4e-4e8d-aa57-ffbe268a7908
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ef271f6ed281f7dc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bb219ed3-42f4-4b4a-b5fd-43afd6d30466
wandb_project: birthday-sn56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bb219ed3-42f4-4b4a-b5fd-43afd6d30466
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 514146dd-9f4e-4e8d-aa57-ffbe268a7908
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0076 | 1 | nan |
| 0.0 | 0.0229 | 3 | nan |
| 0.0 | 0.0459 | 6 | nan |
| 0.0 | 0.0688 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/b2fd33d7-cec1-4787-8d27-9134e2753212 | kk-aivio | 2025-01-10T19:54:44Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T19:43:42Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b2fd33d7-cec1-4787-8d27-9134e2753212
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 67ca02675240a51f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/67ca02675240a51f_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/b2fd33d7-cec1-4787-8d27-9134e2753212
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/67ca02675240a51f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 293589b7-5982-4d1d-bd8c-e478d3dfd31e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 293589b7-5982-4d1d-bd8c-e478d3dfd31e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b2fd33d7-cec1-4787-8d27-9134e2753212
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 3 | nan |
| 0.0 | 0.0007 | 6 | nan |
| 0.0 | 0.0010 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
0x1202/e6b10c28-9bbb-459e-9eef-2ef21bf0637b | 0x1202 | 2025-01-10T19:54:36Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-10T19:54:18Z | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e6b10c28-9bbb-459e-9eef-2ef21bf0637b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 33cc955f25acc611_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/33cc955f25acc611_train_data.json
type:
field_input: sentence1
field_instruction: instruction
field_output: sentence2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: 0x1202/e6b10c28-9bbb-459e-9eef-2ef21bf0637b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/33cc955f25acc611_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f2ed0dfb-46ae-4e06-970d-6bec407a36d4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f2ed0dfb-46ae-4e06-970d-6bec407a36d4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e6b10c28-9bbb-459e-9eef-2ef21bf0637b
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0032 | 1 | 10.3618 |
| 10.3493 | 0.1584 | 50 | 10.3593 |
| 10.3492 | 0.3167 | 100 | 10.3561 |
| 10.342 | 0.4751 | 150 | 10.3538 |
| 10.3348 | 0.6334 | 200 | 10.3534 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nbninh/5a37825d-c338-4212-ad26-fc8d7691a941 | nbninh | 2025-01-10T19:54:32Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T19:36:10Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5a37825d-c338-4212-ad26-fc8d7691a941
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 39217ad3029e2718_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/39217ad3029e2718_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/5a37825d-c338-4212-ad26-fc8d7691a941
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/39217ad3029e2718_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 67fca483-4118-4d2f-936b-5b5d958ee0d0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 67fca483-4118-4d2f-936b-5b5d958ee0d0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5a37825d-c338-4212-ad26-fc8d7691a941
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6431
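As a hedged addition not taken from the original card, the adapter can also be folded into the base weights with `peft`'s `merge_and_unload`, which yields a standalone checkpoint that no longer needs `peft` at inference time:
```python
# Minimal sketch: merge this LoRA adapter into the Qwen1.5-7B base weights and save the result.
# Loading the full model in bfloat16 on CPU needs a substantial amount of RAM.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-7B"
adapter_id = "nbninh/5a37825d-c338-4212-ad26-fc8d7691a941"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # fold LoRA deltas into the base
merged.save_pretrained("qwen1.5-7b-5a37825d-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("qwen1.5-7b-5a37825d-merged")
```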
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0096 | 0.4111 | 200 | 2.6431 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Ejada/numerical_arabic_ | Ejada | 2025-01-10T19:52:25Z | 4,856 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-10T19:48:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
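As a minimal, hedged sketch (assuming this repository holds a standard `transformers` BERT checkpoint for text classification, as the repo tags suggest; the example input and the printed labels are illustrative only):
```python
# Minimal sketch: load the checkpoint with the text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="Ejada/numerical_arabic_")
print(classifier("مثال نصي للتصنيف"))  # prints the predicted label(s) and score(s)
```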
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fuliucansheng/FLUX.1-Depth-dev-diffusers | fuliucansheng | 2025-01-10T19:50:56Z | 23 | 1 | diffusers | [
"diffusers",
"safetensors",
"image-generation",
"flux",
"diffusion-single-file",
"en",
"license:other",
"diffusers:FluxControlPipeline",
"region:us"
] | text-to-image | 2025-01-10T19:21:55Z | ---
language:
- en
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
extra_gated_prompt: By clicking "Agree", you agree to the [FluxDev Non-Commercial License Agreement](https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev/blob/main/LICENSE.md)
and acknowledge the [Acceptable Use Policy](https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev/blob/main/POLICY.md).
tags:
- image-generation
- flux
- diffusion-single-file
---

`FLUX.1 Depth [dev]` is a 12 billion parameter rectified flow transformer capable of generating an image based on a text description while following the structure of a given input image. For more information, please read our [blog post](https://blackforestlabs.ai/flux-1-tools/).
# Key Features
1. Cutting-edge output quality.
2. It blends impressive prompt adherence with maintaining the structure of source images based on depth maps.
3. Trained using guidance distillation, making `FLUX.1 Depth [dev]` more efficient.
4. Open weights to drive new scientific research, and empower artists to develop innovative workflows.
5. Generated outputs can be used for personal, scientific, and commercial purposes as described in the [`FLUX.1 [dev]` Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
# Usage
We provide a reference implementation of `FLUX.1 Depth [dev]`, as well as sampling code, in a dedicated [github repository](https://github.com/black-forest-labs/flux).
Developers and creatives looking to build on top of `FLUX.1 Depth [dev]` are encouraged to use this as a starting point.
## API Endpoints
`FLUX.1 Depth [pro]` is available in our API [bfl.ml](https://docs.bfl.ml/)

## Diffusers
To use `FLUX.1-Depth-dev` with the 🧨 diffusers python library, first install or upgrade `diffusers` and `image_gen_aux`.
```bash
pip install -U diffusers
pip install git+https://github.com/asomoza/image_gen_aux.git
```
Then you can use `FluxControlPipeline` to run the model:
```python
import torch
from diffusers import FluxControlPipeline, FluxTransformer2DModel
from diffusers.utils import load_image
from image_gen_aux import DepthPreprocessor
pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16).to("cuda")
prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")
processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(control_image)[0].convert("RGB")
image = pipe(
prompt=prompt,
control_image=control_image,
height=1024,
width=1024,
num_inference_steps=30,
guidance_scale=10.0,
generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```
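If GPU memory is tight, one optional variation (not part of the original snippet, and requiring `accelerate`) is to enable model CPU offload instead of moving the whole pipeline to CUDA:
```python
# Optional, hedged variation: lower VRAM usage by offloading idle submodules to the CPU.
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # submodules move to the GPU only while they are needed
```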
To learn more, check out the [diffusers](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) documentation.
---
# Limitations
- This model is not intended or able to provide factual information.
- As a statistical model this checkpoint might amplify existing societal biases.
- The model may fail to generate output that matches the prompts.
- Prompt following is heavily influenced by the prompting-style.
# Out-of-Scope Use
The model and its derivatives may not be used
- In any way that violates any applicable national, federal, state, local or international law or regulation.
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; including but not limited to the solicitation, creation, acquisition, or dissemination of child exploitative content.
- To generate or disseminate verifiably false information and/or content with the purpose of harming others.
- To generate or disseminate personal identifiable information that can be used to harm an individual.
- To harass, abuse, threaten, stalk, or bully individuals or groups of individuals.
- To create non-consensual nudity or illegal pornographic content.
- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation.
- Generating or facilitating large-scale disinformation campaigns.
# License
This model falls under the [`FLUX.1 [dev]` Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev/blob/main/LICENSE.md).
|
filipesantoscv11/8bbe6f56-6033-4962-b7fa-45d1f75ed594 | filipesantoscv11 | 2025-01-10T19:49:32Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
] | null | 2025-01-10T19:47:27Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8bbe6f56-6033-4962-b7fa-45d1f75ed594
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e53c72ba80674f72_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e53c72ba80674f72_train_data.json
type:
field_instruction: ptype
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: filipesantoscv11/8bbe6f56-6033-4962-b7fa-45d1f75ed594
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/e53c72ba80674f72_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 947b25e3-b276-4a08-8875-e5b98a03e2b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 947b25e3-b276-4a08-8875-e5b98a03e2b8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8bbe6f56-6033-4962-b7fa-45d1f75ed594
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0037 | 1 | 2.7433 |
| 2.8873 | 0.0297 | 8 | 2.6688 |
| 2.6551 | 0.0594 | 16 | 2.5936 |
| 2.6251 | 0.0891 | 24 | 2.5618 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tuanna08go/163f3447-2f26-a0f3-eda1-04d3d9a8cbdf | tuanna08go | 2025-01-10T19:47:06Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | 2025-01-10T19:35:45Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 163f3447-2f26-a0f3-eda1-04d3d9a8cbdf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 39217ad3029e2718_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/39217ad3029e2718_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/163f3447-2f26-a0f3-eda1-04d3d9a8cbdf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/39217ad3029e2718_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 67fca483-4118-4d2f-936b-5b5d958ee0d0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 67fca483-4118-4d2f-936b-5b5d958ee0d0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 163f3447-2f26-a0f3-eda1-04d3d9a8cbdf
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0021 | 1 | 2.8917 |
| 2.7838 | 0.0206 | 10 | 2.8080 |
| 2.5363 | 0.0411 | 20 | 2.7136 |
| 2.6531 | 0.0617 | 30 | 2.6972 |
| 2.6508 | 0.0822 | 40 | 2.6868 |
| 2.5272 | 0.1028 | 50 | 2.6851 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VERSIL91/6bc59e91-007b-44b2-ab79-c8605e7e3fd3 | VERSIL91 | 2025-01-10T19:43:28Z | 13 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"base_model:adapter:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-10T19:28:00Z | ---
library_name: peft
license: llama3.1
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6bc59e91-007b-44b2-ab79-c8605e7e3fd3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aff0bb4998a5c186_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aff0bb4998a5c186_train_data.json
type:
field_input: ''
field_instruction: input_text
field_output: target_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/6bc59e91-007b-44b2-ab79-c8605e7e3fd3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/aff0bb4998a5c186_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6bc59e91-007b-44b2-ab79-c8605e7e3fd3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6bc59e91-007b-44b2-ab79-c8605e7e3fd3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6bc59e91-007b-44b2-ab79-c8605e7e3fd3
This model is a fine-tuned version of [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.79 | 0.0011 | 1 | 0.7996 |
| 0.7558 | 0.0057 | 5 | 0.7683 |
| 0.655 | 0.0113 | 10 | 0.5883 |
| 0.3356 | 0.0170 | 15 | 0.3324 |
| 0.2807 | 0.0227 | 20 | 0.2862 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/LLaMA3-SFT-i1-GGUF | mradermacher | 2025-01-10T19:42:55Z | 654 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:RLHFlow/LLaMA3-SFT",
"base_model:quantized:RLHFlow/LLaMA3-SFT",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-10T11:40:55Z | ---
base_model: RLHFlow/LLaMA3-SFT
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/RLHFlow/LLaMA3-SFT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LLaMA3-SFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
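For a quick start with a single-file quant, one possible llama.cpp invocation is sketched below (it uses the i1-Q4_K_M file from the table that follows; any of the listed single-file quants works the same way):
```bash
# Download and run one of the provided quants directly from this repo
llama-cli --hf-repo mradermacher/LLaMA3-SFT-i1-GGUF --hf-file LLaMA3-SFT.i1-Q4_K_M.gguf -p "Hello"
```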
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA3-SFT-i1-GGUF/resolve/main/LLaMA3-SFT.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
sgraham/palp_unsloth_finetune | sgraham | 2025-01-10T19:41:37Z | 6 | 0 | transformers | [
"transformers",
"llava",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2025-01-10T19:41:31Z | ---
base_model: unsloth/pixtral-12b-2409-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llava
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** sgraham
- **License:** apache-2.0
- **Finetuned from model:** unsloth/pixtral-12b-2409-unsloth-bnb-4bit
This llava model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kostiantynk/a4d73f4a-253b-4b1e-8151-4eac7f5d9152 | kostiantynk | 2025-01-10T19:40:25Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-01-10T19:35:26Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4d73f4a-253b-4b1e-8151-4eac7f5d9152
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fe2766c24526d5b6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fe2766c24526d5b6_train_data.json
type:
field_instruction: instruction
field_output: output_1
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/a4d73f4a-253b-4b1e-8151-4eac7f5d9152
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/fe2766c24526d5b6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a946a5a-c33d-4e49-b8f6-d790fd5d9545
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a946a5a-c33d-4e49-b8f6-d790fd5d9545
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4d73f4a-253b-4b1e-8151-4eac7f5d9152
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0009 | 0.0005 | 1 | 1.1621 |
| 1.1689 | 0.0015 | 3 | 1.1606 |
| 1.0494 | 0.0031 | 6 | 1.1486 |
| 1.0826 | 0.0046 | 9 | 1.1074 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
CrimsonZockt/MarcysiaRyskala-FLUXLORA | CrimsonZockt | 2025-01-10T19:39:12Z | 30 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-10T19:29:04Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Marcysia Ryskala, black tanktop, professional headshot, photoshoot.
output:
url: images/Marcysia Ryskala, black tanktop, professional h....png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Marcysia Ryskala
---
# MarcysiaRyskala
<Gallery />
## Model Description
This is a LoRA model that I trained on Weights.gg.
## Trigger words
You should use `Marcysia Ryskala` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/CrimsonZockt/MarcysiaRyskala-FLUXLORA/tree/main) them in the Files & versions tab.
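For loading, here is a minimal 🧨 diffusers sketch in the same style as the other FLUX LoRA cards; it assumes the safetensors file in this repo is named `lora.safetensors`, which may differ:
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline and attach this LoRA adapter (weight file name is an assumption).
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('CrimsonZockt/MarcysiaRyskala-FLUXLORA', weight_name='lora.safetensors')
# Include the trigger words so the LoRA activates.
image = pipeline('Marcysia Ryskala, black tanktop, professional headshot, photoshoot.').images[0]
```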
|
tmpmodelsave/type12_step100 | tmpmodelsave | 2025-01-10T19:37:44Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T19:31:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Faisalkhan758/image_caption | Faisalkhan758 | 2025-01-10T19:37:28Z | 24 | 0 | keras | [
"keras",
"tf-keras",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T19:24:55Z | ---
license: apache-2.0
---
|
kokovova/e0914c22-6f1b-46d0-9682-e06d24f3afef | kokovova | 2025-01-10T19:35:06Z | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-01-10T19:29:44Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e0914c22-6f1b-46d0-9682-e06d24f3afef
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fe2766c24526d5b6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fe2766c24526d5b6_train_data.json
type:
field_instruction: instruction
field_output: output_1
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kokovova/e0914c22-6f1b-46d0-9682-e06d24f3afef
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/fe2766c24526d5b6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a946a5a-c33d-4e49-b8f6-d790fd5d9545
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a946a5a-c33d-4e49-b8f6-d790fd5d9545
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e0914c22-6f1b-46d0-9682-e06d24f3afef
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.2558 |
| 1.1877 | 0.0041 | 8 | 1.1816 |
| 1.1013 | 0.0082 | 16 | 1.0907 |
| 1.092 | 0.0122 | 24 | 1.0758 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
HigherMind/career_advancement-Q8_0-GGUF | HigherMind | 2025-01-10T19:34:39Z | 17 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-10T19:34:06Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
base_model: HigherMind/career_advancement
---
# HigherMind/career_advancement-Q8_0-GGUF
This model was converted to GGUF format from [`HigherMind/career_advancement`](https://huggingface.co/HigherMind/career_advancement) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HigherMind/career_advancement) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo HigherMind/career_advancement-Q8_0-GGUF --hf-file career_advancement-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo HigherMind/career_advancement-Q8_0-GGUF --hf-file career_advancement-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo HigherMind/career_advancement-Q8_0-GGUF --hf-file career_advancement-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo HigherMind/career_advancement-Q8_0-GGUF --hf-file career_advancement-q8_0.gguf -c 2048
```
|
Seebrasse345/jojo | Seebrasse345 | 2025-01-10T19:34:04Z | 13 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T18:55:29Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jojo
---
# Jojo
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jojo` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Seebrasse345/jojo', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
denbeo/abefd6ba-b483-4aa9-8d96-e1866fcf975d | denbeo | 2025-01-10T19:30:24Z | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T17:46:43Z | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: abefd6ba-b483-4aa9-8d96-e1866fcf975d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 227afdb836d95568_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/227afdb836d95568_train_data.json
type:
field_input: level
field_instruction: name
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/abefd6ba-b483-4aa9-8d96-e1866fcf975d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/227afdb836d95568_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7d046c5f-2837-45a6-a88d-f3d18615b72c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7d046c5f-2837-45a6-a88d-f3d18615b72c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# abefd6ba-b483-4aa9-8d96-e1866fcf975d
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2795 | 0.0073 | 200 | 1.3097 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MrLeonEvans/rle-lora | MrLeonEvans | 2025-01-10T19:26:41Z | 1,031 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-12-21T00:05:22Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: RLE
---
# Rle Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `RLE` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mrleonevans/rle-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
duyphu/0ee83462-050b-63b8-40c1-c1604e075c9b | duyphu | 2025-01-10T19:23:15Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T19:19:08Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0ee83462-050b-63b8-40c1-c1604e075c9b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 354d69cdce7fda99_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/354d69cdce7fda99_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/0ee83462-050b-63b8-40c1-c1604e075c9b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/354d69cdce7fda99_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ab530c62-b812-48a1-978d-a3445ec1cd9e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ab530c62-b812-48a1-978d-a3445ec1cd9e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0ee83462-050b-63b8-40c1-c1604e075c9b
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0037 | 1 | nan |
| 0.0 | 0.0365 | 10 | nan |
| 0.0 | 0.0731 | 20 | nan |
| 0.0 | 0.1096 | 30 | nan |
| 0.0 | 0.1461 | 40 | nan |
| 0.0 | 0.1826 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
trenden/75497487-5c9d-48a3-b707-6d131c57a703 | trenden | 2025-01-10T19:21:44Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T19:21:05Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75497487-5c9d-48a3-b707-6d131c57a703
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ef271f6ed281f7dc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ef271f6ed281f7dc_train_data.json
type:
field_input: spec_relation
field_instruction: premise
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/75497487-5c9d-48a3-b707-6d131c57a703
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ef271f6ed281f7dc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bb219ed3-42f4-4b4a-b5fd-43afd6d30466
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bb219ed3-42f4-4b4a-b5fd-43afd6d30466
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 75497487-5c9d-48a3-b707-6d131c57a703
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0076 | 1 | nan |
| 0.0 | 0.0229 | 3 | nan |
| 0.0 | 0.0459 | 6 | nan |
| 0.0 | 0.0688 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
0x1202/4aef5c19-2c47-4652-aaa2-9cd414c34dc3 | 0x1202 | 2025-01-10T19:21:16Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T19:18:14Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4aef5c19-2c47-4652-aaa2-9cd414c34dc3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 354d69cdce7fda99_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/354d69cdce7fda99_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: 0x1202/4aef5c19-2c47-4652-aaa2-9cd414c34dc3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/354d69cdce7fda99_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ab530c62-b812-48a1-978d-a3445ec1cd9e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ab530c62-b812-48a1-978d-a3445ec1cd9e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4aef5c19-2c47-4652-aaa2-9cd414c34dc3
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0037 | 1 | 1.6911 |
| 1.7419 | 0.1826 | 50 | 1.6199 |
| 1.4866 | 0.3653 | 100 | 1.5911 |
| 1.8726 | 0.5479 | 150 | 1.5384 |
| 1.6633 | 0.7306 | 200 | 1.5536 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Haesteining/PhiSN29Jan10_1 | Haesteining | 2025-01-10T19:19:39Z | 52 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T19:15:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
trenden/c3d7635d-4393-429e-bfca-0b4eeabe0a7e | trenden | 2025-01-10T19:19:17Z | 13 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:TitanML/tiny-mixtral",
"base_model:adapter:TitanML/tiny-mixtral",
"region:us"
] | null | 2025-01-10T19:17:09Z | ---
library_name: peft
base_model: TitanML/tiny-mixtral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c3d7635d-4393-429e-bfca-0b4eeabe0a7e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TitanML/tiny-mixtral
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2dd05a76ac3ce214_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2dd05a76ac3ce214_train_data.json
type:
field_input: system
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/c3d7635d-4393-429e-bfca-0b4eeabe0a7e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2dd05a76ac3ce214_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cde4566c-bdc0-4058-bdd5-0fd8c2957225
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cde4566c-bdc0-4058-bdd5-0fd8c2957225
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c3d7635d-4393-429e-bfca-0b4eeabe0a7e
This model is a fine-tuned version of [TitanML/tiny-mixtral](https://huggingface.co/TitanML/tiny-mixtral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0006 | 3 | nan |
| 0.0 | 0.0011 | 6 | nan |
| 0.0 | 0.0017 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk-out/c5658bcf-cea4-4d2f-ac11-a04dead39a30 | kostiantynk-out | 2025-01-10T19:19:03Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"region:us"
] | null | 2025-01-10T19:04:38Z | ---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c5658bcf-cea4-4d2f-ac11-a04dead39a30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3ad533406285a0ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3ad533406285a0ea_train_data.json
type:
field_instruction: dialogue
field_output: reference
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/c5658bcf-cea4-4d2f-ac11-a04dead39a30
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/3ad533406285a0ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c5658bcf-cea4-4d2f-ac11-a04dead39a30
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7899 | 0.0003 | 1 | 0.9394 |
| 1.1065 | 0.0009 | 3 | 0.9388 |
| 0.9389 | 0.0017 | 6 | 0.9296 |
| 0.6346 | 0.0026 | 9 | 0.8853 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
duyphu/dd33e94f-0a82-02d7-7de1-636e5f94e416 | duyphu | 2025-01-10T19:17:19Z | 14 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | 2025-01-10T19:10:22Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dd33e94f-0a82-02d7-7de1-636e5f94e416
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6fd144c2f0e2602e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6fd144c2f0e2602e_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/dd33e94f-0a82-02d7-7de1-636e5f94e416
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/6fd144c2f0e2602e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6b22e34a-3e02-4be9-8d62-8b067bce88c7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6b22e34a-3e02-4be9-8d62-8b067bce88c7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dd33e94f-0a82-02d7-7de1-636e5f94e416
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 12.4569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 12.4569 |
| 12.4538 | 0.0009 | 10 | 12.4569 |
| 12.4591 | 0.0019 | 20 | 12.4569 |
| 12.4573 | 0.0028 | 30 | 12.4569 |
| 12.4551 | 0.0038 | 40 | 12.4569 |
| 12.4573 | 0.0047 | 50 | 12.4569 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bbytxt/aef4215d-d9af-4c19-af2c-f443cc4e9d69 | bbytxt | 2025-01-10T19:17:09Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T18:01:47Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aef4215d-d9af-4c19-af2c-f443cc4e9d69
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2a3a0d6fea0609c5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2a3a0d6fea0609c5_train_data.json
type:
field_input: knowledge
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: bbytxt/aef4215d-d9af-4c19-af2c-f443cc4e9d69
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2a3a0d6fea0609c5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7185e2c9-93a6-4519-a3b9-075c4614c1de
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7185e2c9-93a6-4519-a3b9-075c4614c1de
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aef4215d-d9af-4c19-af2c-f443cc4e9d69
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.1144 |
| 3.8129 | 0.0044 | 50 | 0.8564 |
| 3.3992 | 0.0088 | 100 | 0.8134 |
| 3.1222 | 0.0131 | 150 | 0.7958 |
| 3.6426 | 0.0175 | 200 | 0.7912 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
recogna-nlp/gembode-7b | recogna-nlp | 2025-01-10T19:16:54Z | 107 | 0 | peft | [
"peft",
"model-index",
"region:us"
] | null | 2024-04-21T03:20:27Z | ---
library_name: peft
model-index:
- name: gembode-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 66.9
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 57.16
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 45.47
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 86.61
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 71.39
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 67.4
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 79.81
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 63.75
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 65.49
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b
name: Open Portuguese LLM Leaderboard
---
# gembode-7b
<!--- PROJECT LOGO -->
<p align="center">
<img src="https://huggingface.co/recogna-nlp/GemBode-2b-it/resolve/main/gembode.jpg" alt="GemBode Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
GemBode is a language model fine-tuned for Portuguese, developed from the Gemma-7b model provided by [Google](https://huggingface.co/google/gemma-7b).
## Main Features
- **Base Model:** Gemma-7b, created by Google, with 7 billion parameters.
- **Fine-tuning Dataset:** [UltraAlpaca](https://huggingface.co/datasets/recogna-nlp/ultra-alpaca-ptbr)
- **Training:** Training was carried out by fine-tuning gemma-7b with QLoRA.
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/recogna-nlp/gembode-7b) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**67.11**|
|ENEM Challenge (No Images)| 66.90|
|BLUEX (No Images) | 57.16|
|OAB Exams | 45.47|
|Assin2 RTE | 86.61|
|Assin2 STS | 71.39|
|FaQuAD NLI | 67.40|
|HateBR Binary | 79.81|
|PT Hate Speech Binary | 63.75|
|tweetSentBR | 65.49|
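## Example Usage
The snippet below is a minimal usage sketch, not part of the original card: it assumes the `peft` and `transformers` libraries are installed, that you have access to the `google/gemma-7b` base weights, and that this repository ships a tokenizer (if it does not, load the tokenizer from the base model instead).
```py
# Illustrative sketch: load the gembode-7b LoRA adapter on top of the Gemma-7b base model.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("recogna-nlp/gembode-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("recogna-nlp/gembode-7b")

prompt = "Explique brevemente o que é aprendizado de máquina."  # "Briefly explain what machine learning is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```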
|
nhung03/06a32eaf-bfe9-421e-96f0-f720212af702 | nhung03 | 2025-01-10T19:16:52Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T19:10:05Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 06a32eaf-bfe9-421e-96f0-f720212af702
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ef271f6ed281f7dc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ef271f6ed281f7dc_train_data.json
type:
field_input: spec_relation
field_instruction: premise
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/06a32eaf-bfe9-421e-96f0-f720212af702
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ef271f6ed281f7dc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bb219ed3-42f4-4b4a-b5fd-43afd6d30466
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bb219ed3-42f4-4b4a-b5fd-43afd6d30466
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 06a32eaf-bfe9-421e-96f0-f720212af702
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 131
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2319 | 0.9943 | 130 | 0.3022 |
| 0.4401 | 1.0057 | 131 | 0.3030 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VERSIL91/81c4dc0a-8ef5-40c9-b0db-10706ae31ddc | VERSIL91 | 2025-01-10T19:14:37Z | 13 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2025-01-10T19:02:57Z | ---
library_name: peft
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 81c4dc0a-8ef5-40c9-b0db-10706ae31ddc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: microsoft/phi-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 08ebc5fee3289a2a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/08ebc5fee3289a2a_train_data.json
type:
field_instruction: original_prompt_text
field_output: jailbreak_prompt_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/81c4dc0a-8ef5-40c9-b0db-10706ae31ddc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/08ebc5fee3289a2a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 81c4dc0a-8ef5-40c9-b0db-10706ae31ddc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 81c4dc0a-8ef5-40c9-b0db-10706ae31ddc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 81c4dc0a-8ef5-40c9-b0db-10706ae31ddc
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3353 | 0.0032 | 1 | 2.4703 |
| 2.4301 | 0.0159 | 5 | 2.4672 |
| 2.3042 | 0.0318 | 10 | 2.4528 |
| 2.2668 | 0.0477 | 15 | 2.4250 |
| 2.2307 | 0.0636 | 20 | 2.4148 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ZhangShenao/SELM-Llama-3.2-3B-Instruct-re-00001-iter-2 | ZhangShenao | 2025-01-10T19:12:50Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ZhangShenao/SELM-Llama-3.2-3B-Instruct-re-00001-iter-1",
"base_model:finetune:ZhangShenao/SELM-Llama-3.2-3B-Instruct-re-00001-iter-1",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T07:52:07Z | ---
library_name: transformers
license: llama3.2
base_model: ZhangShenao/SELM-Llama-3.2-3B-Instruct-re-00001-iter-1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- alignment-handbook
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: SELM-Llama-3.2-3B-Instruct-re-00001-iter-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SELM-Llama-3.2-3B-Instruct-re-00001-iter-2
This model is a fine-tuned version of [ZhangShenao/SELM-Llama-3.2-3B-Instruct-re-00001-iter-1](https://huggingface.co/ZhangShenao/SELM-Llama-3.2-3B-Instruct-re-00001-iter-1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.45.0
- Pytorch 2.5.1+cu124
- Datasets 2.14.6
- Tokenizers 0.20.3
|
sergioalves/9c842882-c751-416a-9522-d4aa4a501dde | sergioalves | 2025-01-10T19:12:08Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T18:06:31Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9c842882-c751-416a-9522-d4aa4a501dde
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fdf8963a439ed3fc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fdf8963a439ed3fc_train_data.json
type:
field_input: bad
field_instruction: good
field_output: sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: sergioalves/9c842882-c751-416a-9522-d4aa4a501dde
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/fdf8963a439ed3fc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e30a7f9a-d6aa-4337-a4ea-894f158b7780
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e30a7f9a-d6aa-4337-a4ea-894f158b7780
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9c842882-c751-416a-9522-d4aa4a501dde
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 8 | nan |
| 0.0 | 0.0008 | 16 | nan |
| 0.0 | 0.0012 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Seebrasse345/gangous | Seebrasse345 | 2025-01-10T19:09:35Z | 12 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T18:40:37Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: gangous
---
# Gangous
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `gangous` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach the LoRA weights trained in this repository
pipeline.load_lora_weights('Seebrasse345/gangous', weight_name='lora.safetensors')
# Include the trigger word `gangous` in your prompt to activate the concept
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
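As a small follow-up sketch (the `fuse_lora` call and the `lora_scale` value below are assumptions based on the linked diffusers documentation, not on this card), the LoRA strength can be adjusted before generating:
```py
# Optional: fold the LoRA into the base weights at reduced strength (assumed diffusers API).
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('a portrait photo of gangous, studio lighting').images[0]
image.save('gangous.png')
```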
|
matrixportal/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q4_K_M-GGUF | matrixportal | 2025-01-10T19:07:12Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1",
"base_model:quantized:Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-10T19:06:53Z | ---
base_model: Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1
license: cc-by-nc-nd-4.0
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# matrixportal/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1`](https://huggingface.co/Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q4_K_M-GGUF --hf-file mistral-7b-instruct_v0.2_una-thebeagle-7b-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q4_K_M-GGUF --hf-file mistral-7b-instruct_v0.2_una-thebeagle-7b-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportal/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q4_K_M-GGUF --hf-file mistral-7b-instruct_v0.2_una-thebeagle-7b-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportal/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q4_K_M-GGUF --hf-file mistral-7b-instruct_v0.2_una-thebeagle-7b-v1-q4_k_m.gguf -c 2048
```
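If you prefer Python, a minimal sketch using the `llama-cpp-python` bindings is shown below; this is an assumption on top of the original card (which only covers the llama.cpp CLI and server) and requires `pip install llama-cpp-python`.
```py
# Illustrative sketch: download the quantized GGUF file from this repo and run a completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="matrixportal/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q4_K_M-GGUF",
    filename="mistral-7b-instruct_v0.2_una-thebeagle-7b-v1-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```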
|
fedovtt/4e78eef5-005e-4d72-aca6-94e14307588b | fedovtt | 2025-01-10T19:06:31Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-10T18:20:50Z | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4e78eef5-005e-4d72-aca6-94e14307588b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2aa6c6220f63108c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2aa6c6220f63108c_train_data.json
type:
field_input: label
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: fedovtt/4e78eef5-005e-4d72-aca6-94e14307588b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/2aa6c6220f63108c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b42b18be-7dfa-4f5a-b449-5eea97ddec10
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b42b18be-7dfa-4f5a-b449-5eea97ddec10
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4e78eef5-005e-4d72-aca6-94e14307588b
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0004 | 8 | nan |
| 0.0 | 0.0008 | 16 | nan |
| 0.0 | 0.0011 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/e5dced4d-8df6-4f83-a3e1-06fb9a73b2a1 | Best000 | 2025-01-10T19:05:04Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"region:us"
] | null | 2025-01-10T18:33:55Z | ---
library_name: peft
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e5dced4d-8df6-4f83-a3e1-06fb9a73b2a1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ea4b9b40db84f198_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ea4b9b40db84f198_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/e5dced4d-8df6-4f83-a3e1-06fb9a73b2a1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ea4b9b40db84f198_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 04651042-a779-4343-9025-d1a23af15c30
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 04651042-a779-4343-9025-d1a23af15c30
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e5dced4d-8df6-4f83-a3e1-06fb9a73b2a1
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.2355 | 0.0000 | 1 | 13.6415 |
| 13.9426 | 0.0001 | 3 | 13.1736 |
| 9.4378 | 0.0002 | 6 | 6.8644 |
| 3.0478 | 0.0004 | 9 | 3.5149 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dzanbek/9a519853-9e48-4e2c-b9f2-10528711d7a3 | dzanbek | 2025-01-10T19:04:29Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T18:56:29Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9a519853-9e48-4e2c-b9f2-10528711d7a3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 90345863afa5cc84_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/90345863afa5cc84_train_data.json
type:
field_input: ''
field_instruction: article
field_output: highlights
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dzanbek/9a519853-9e48-4e2c-b9f2-10528711d7a3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/90345863afa5cc84_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fcb23765-9339-4c73-bdf6-8473b5e7fbba
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fcb23765-9339-4c73-bdf6-8473b5e7fbba
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9a519853-9e48-4e2c-b9f2-10528711d7a3
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 3.2615 |
| 3.2953 | 0.0027 | 8 | 3.1890 |
| 3.2127 | 0.0055 | 16 | 3.1111 |
| 3.0333 | 0.0082 | 24 | 3.0741 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso07/048c47fa-88ea-4992-bf66-ae8e50cfad40 | lesso07 | 2025-01-10T19:03:20Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"region:us"
] | null | 2025-01-10T14:59:11Z | ---
library_name: peft
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 048c47fa-88ea-4992-bf66-ae8e50cfad40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: true
chat_template: llama3
datasets:
- data_files:
- ea4b9b40db84f198_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ea4b9b40db84f198_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso07/048c47fa-88ea-4992-bf66-ae8e50cfad40
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/ea4b9b40db84f198_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 04651042-a779-4343-9025-d1a23af15c30
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 04651042-a779-4343-9025-d1a23af15c30
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# 048c47fa-88ea-4992-bf66-ae8e50cfad40
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.5422 | 0.0001 | 1 | 12.2763 |
| 0.2557 | 0.0007 | 9 | 0.4532 |
| 0.1373 | 0.0014 | 18 | 0.1263 |
| 0.062 | 0.0022 | 27 | 0.0433 |
| 0.0013 | 0.0029 | 36 | 0.0399 |
| 0.0007 | 0.0036 | 45 | 0.0365 |
| 0.0059 | 0.0043 | 54 | 0.0253 |
| 0.0027 | 0.0050 | 63 | 0.0254 |
| 0.0309 | 0.0057 | 72 | 0.0293 |
| 0.0047 | 0.0065 | 81 | 0.0307 |
| 0.0044 | 0.0072 | 90 | 0.0276 |
| 0.0003 | 0.0079 | 99 | 0.0274 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thaffggg/dc3b9001-ce04-4719-8404-dcacde9243d2 | thaffggg | 2025-01-10T19:00:17Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T18:30:36Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc3b9001-ce04-4719-8404-dcacde9243d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f925ec5472bd6eac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f925ec5472bd6eac_train_data.json
type:
field_input: description
field_instruction: title
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/dc3b9001-ce04-4719-8404-dcacde9243d2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f925ec5472bd6eac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9670d58f-4bf6-46f7-b76f-d44f3b5fb1f5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9670d58f-4bf6-46f7-b76f-d44f3b5fb1f5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# dc3b9001-ce04-4719-8404-dcacde9243d2
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5584 | 0.0419 | 200 | 1.6168 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk-out/45bb82d7-4703-4e62-938b-0f37a6b3ecfd | kostiantynk-out | 2025-01-10T18:59:42Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T18:47:35Z | ---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 45bb82d7-4703-4e62-938b-0f37a6b3ecfd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5f8782025623cf05_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5f8782025623cf05_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/45bb82d7-4703-4e62-938b-0f37a6b3ecfd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5f8782025623cf05_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0eaa2168-d144-4655-b824-847b031556c1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0eaa2168-d144-4655-b824-847b031556c1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 45bb82d7-4703-4e62-938b-0f37a6b3ecfd
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 3.3423 | 0.0006 | 3 | nan |
| 7.2252 | 0.0011 | 6 | nan |
| 0.0 | 0.0017 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VERSIL91/ab530c62-b812-48a1-978d-a3445ec1cd9e | VERSIL91 | 2025-01-10T18:56:30Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T18:47:15Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ab530c62-b812-48a1-978d-a3445ec1cd9e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 354d69cdce7fda99_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/354d69cdce7fda99_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/ab530c62-b812-48a1-978d-a3445ec1cd9e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/354d69cdce7fda99_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ab530c62-b812-48a1-978d-a3445ec1cd9e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ab530c62-b812-48a1-978d-a3445ec1cd9e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ab530c62-b812-48a1-978d-a3445ec1cd9e
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0146 | 1 | nan |
| 0.0 | 0.0731 | 5 | nan |
| 0.0 | 0.1461 | 10 | nan |
| 0.0 | 0.2192 | 15 | nan |
| 0.0 | 0.2922 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Logii33/nllb-peft-2-way-enta | Logii33 | 2025-01-10T18:54:13Z | 21 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:adapter:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-01-10T12:36:28Z | ---
base_model: facebook/nllb-200-distilled-600M
library_name: peft
license: cc-by-nc-4.0
metrics:
- sacrebleu
tags:
- generated_from_trainer
model-index:
- name: nllb-peft-2-way-enta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-peft-2-way-enta
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6779
- Sacrebleu: 14.6250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|
| 6.7496 | 1.0 | 11250 | 6.6810 | 14.3659 |
| 6.737 | 2.0 | 22500 | 6.6785 | 14.5979 |
| 6.7405 | 3.0 | 33750 | 6.6779 | 14.6250 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
awrub93/dashawn | awrub93 | 2025-01-10T18:52:49Z | 19 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T18:12:44Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: dashwan
---
# Dashawn
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `dashwan` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('awrub93/dashawn', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
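As a rough sketch of the fusing mentioned above, the LoRA can also be merged into the base weights at a chosen strength; the scale below is an arbitrary example, not a tuned value:
```py
# Hedged sketch: fusing this LoRA into the FLUX.1-dev base weights at a chosen scale.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('awrub93/dashawn', weight_name='lora.safetensors')
pipeline.fuse_lora(lora_scale=0.8)  # 0.8 is an illustrative strength, not a recommendation
image = pipeline('dashwan portrait photo').images[0]
pipeline.unfuse_lora()  # optional: restore the un-fused base weights
```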
|
mohmdsh/whisper-with-augmentation-small-arabic_with-diacritics | mohmdsh | 2025-01-10T18:48:33Z | 58 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-01-10T14:02:28Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-with-augmentation-small-arabic_with-diacritics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-with-augmentation-small-arabic_with-diacritics
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7564
- Wer: 0.7171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.0489 | 3.5133 | 200 | 0.7624 | 0.7171 |
| 0.0143 | 7.0177 | 400 | 1.0633 | 0.7185 |
| 0.005 | 10.5310 | 600 | 1.0752 | 0.7178 |
| 0.0017 | 14.0354 | 800 | 1.1699 | 0.7127 |
| 0.0009 | 17.5487 | 1000 | 1.1766 | 0.7163 |
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
dimasik87/46ec6a4e-2713-4ac4-b897-552cb8310ac0 | dimasik87 | 2025-01-10T18:45:55Z | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T17:46:25Z | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 46ec6a4e-2713-4ac4-b897-552cb8310ac0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 227afdb836d95568_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/227afdb836d95568_train_data.json
type:
field_input: level
field_instruction: name
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik87/46ec6a4e-2713-4ac4-b897-552cb8310ac0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/227afdb836d95568_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7d046c5f-2837-45a6-a88d-f3d18615b72c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7d046c5f-2837-45a6-a88d-f3d18615b72c
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 46ec6a4e-2713-4ac4-b897-552cb8310ac0
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5118
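A minimal inference sketch, assuming the checkpoint pushed to this repo is a standard PEFT LoRA adapter for the base model above:
```python
# Hedged sketch: attaching this LoRA adapter to its base model for inference.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("01-ai/Yi-1.5-9B-Chat-16K", device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "dimasik87/46ec6a4e-2713-4ac4-b897-552cb8310ac0")
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-1.5-9B-Chat-16K")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```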
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.2709 |
| 1.8914 | 0.0003 | 8 | 1.7924 |
| 1.6693 | 0.0006 | 16 | 1.5610 |
| 1.5154 | 0.0009 | 24 | 1.5118 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VERSIL91/e30a7f9a-d6aa-4337-a4ea-894f158b7780 | VERSIL91 | 2025-01-10T18:44:16Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T18:06:39Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e30a7f9a-d6aa-4337-a4ea-894f158b7780
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fdf8963a439ed3fc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fdf8963a439ed3fc_train_data.json
type:
field_input: bad
field_instruction: good
field_output: sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/e30a7f9a-d6aa-4337-a4ea-894f158b7780
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/fdf8963a439ed3fc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e30a7f9a-d6aa-4337-a4ea-894f158b7780
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e30a7f9a-d6aa-4337-a4ea-894f158b7780
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e30a7f9a-d6aa-4337-a4ea-894f158b7780
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0010 | 5 | nan |
| 0.0 | 0.0020 | 10 | nan |
| 0.0 | 0.0031 | 15 | nan |
| 0.0 | 0.0041 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PrunaAI/minh132-de-v3.5-bnb-8bit-smashed | PrunaAI | 2025-01-10T18:43:04Z | 7 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:minh132/de-v3.5",
"base_model:quantized:minh132/de-v3.5",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T18:33:27Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: minh132/de-v3.5
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases; a minimal timing sketch follows below.
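The sketch below only illustrates the two timing conventions, not the benchmark actually used (that configuration lives in `model/smash_config.json`); the model, input, and token count are placeholder assumptions.
```python
# Illustrative sketch of "Sync" vs "Async" timing; not the Pruna benchmark itself.
import time
import torch

def time_generation(model, input_ids, sync: bool) -> float:
    start = time.perf_counter()
    _ = model.generate(input_ids, max_new_tokens=64)  # placeholder workload
    if sync:
        torch.cuda.synchronize()  # "Sync": wait for every queued GPU kernel to finish
    # "Async": stop as soon as the CPU regains control and can use the output
    return time.perf_counter() - start
```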
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo minh132/de-v3.5 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/minh132-de-v3.5-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("minh132/de-v3.5")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model minh132/de-v3.5, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
tuanna08go/b2634976-6617-8249-2c29-5085e619f1de | tuanna08go | 2025-01-10T18:40:35Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T18:07:45Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b2634976-6617-8249-2c29-5085e619f1de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c8a49b54a2faeab1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c8a49b54a2faeab1_train_data.json
type:
field_input: paper_abstract
field_instruction: invitation
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/b2634976-6617-8249-2c29-5085e619f1de
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c8a49b54a2faeab1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e1ebdca1-1f83-4029-b78d-8db2687c4b59
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e1ebdca1-1f83-4029-b78d-8db2687c4b59
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b2634976-6617-8249-2c29-5085e619f1de
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0008 | 10 | nan |
| 0.0 | 0.0016 | 20 | nan |
| 0.0 | 0.0023 | 30 | nan |
| 0.0 | 0.0031 | 40 | nan |
| 0.0 | 0.0039 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF | mradermacher | 2025-01-10T18:38:27Z | 222 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:arcee-ai/Patent-Base-Barcenas-Orca-2-7B-Slerp",
"base_model:quantized:arcee-ai/Patent-Base-Barcenas-Orca-2-7B-Slerp",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T17:32:38Z | ---
base_model: arcee-ai/Patent-Base-Barcenas-Orca-2-7B-Slerp
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/arcee-ai/Patent-Base-Barcenas-Orca-2-7B-Slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
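For example, a single-file quant from the table below can be loaded with llama-cpp-python roughly as follows; the file name, context size, and prompt are illustrative assumptions:
```python
# Hedged sketch: running one of the GGUF quants below with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Patent-Base-Barcenas-Orca-2-7B-Slerp.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,  # illustrative context size
)
out = llm("Explain what a patent claim is.", max_tokens=128)
print(out["choices"][0]["text"])
```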
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Patent-Base-Barcenas-Orca-2-7B-Slerp-GGUF/resolve/main/Patent-Base-Barcenas-Orca-2-7B-Slerp.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
macordob/clasificador-tweets | macordob | 2025-01-10T18:33:04Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-10T18:32:26Z | ---
library_name: transformers
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-tweets
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1347
- Accuracy: 0.6383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 23 | 1.7416 | 0.3404 |
| No log | 2.0 | 46 | 1.5459 | 0.5319 |
| No log | 3.0 | 69 | 1.3115 | 0.6383 |
| No log | 4.0 | 92 | 1.1876 | 0.6383 |
| No log | 5.0 | 115 | 1.1770 | 0.6383 |
| No log | 6.0 | 138 | 1.1774 | 0.6383 |
| No log | 7.0 | 161 | 1.1591 | 0.6383 |
| No log | 8.0 | 184 | 1.1297 | 0.6596 |
| No log | 9.0 | 207 | 1.1315 | 0.6383 |
| No log | 10.0 | 230 | 1.1347 | 0.6383 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
awrub93/kanoa | awrub93 | 2025-01-10T18:29:23Z | 18 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T18:09:50Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: kanoa
---
# Kanoa
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `kanoa` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('awrub93/kanoa', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
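As a rough sketch of the weighting mentioned above, the adapter can be loaded under an explicit name and scaled down; the adapter name and 0.7 strength are illustrative assumptions:
```py
# Hedged sketch: scaling the strength of this LoRA at inference time.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('awrub93/kanoa', weight_name='lora.safetensors', adapter_name='kanoa')
pipeline.set_adapters(['kanoa'], adapter_weights=[0.7])  # 0.7 is an arbitrary example strength
image = pipeline('kanoa surfing at sunset').images[0]
```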
|
BlackHillsInformationSecurity/Llama-3.2-3B-Instruct-abliterated | BlackHillsInformationSecurity | 2025-01-10T18:29:08Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"abliterated",
"uncensored",
"conversational",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T17:15:49Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- abliterated
- uncensored
---
# 🦙 Llama-3.2-3B-Instruct-abliterated
This is an uncensored version of Llama 3.2 3B Instruct created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it).
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
## ollama
You can use [huihui_ai/llama3.2-abliterate:3b](https://ollama.com/huihui_ai/llama3.2-abliterate:3b) directly,
```
ollama run huihui_ai/llama3.2-abliterate
```
or create your own model using the following steps.
1. Download this model.
```
huggingface-cli download huihui-ai/Llama-3.2-3B-Instruct-abliterated --local-dir ./huihui-ai/Llama-3.2-3B-Instruct-abliterated
```
2. Get Llama-3.2-3B-Instruct model for reference.
```
ollama pull llama3.2
```
3. Export Llama-3.2-3B-Instruct model parameters.
```
ollama show llama3.2 --modelfile > Modelfile
```
4. Modify the Modelfile: remove all comment lines (indicated by #) before the "FROM" keyword, then replace the "FROM" line with the following content.
```
FROM huihui-ai/Llama-3.2-3B-Instruct-abliterated
```
5. Use `ollama create` to create the quantized model.
```
ollama create --quantize q4_K_M -f Modelfile Llama-3.2-3B-Instruct-abliterated-q4_K_M
```
6. Run the model
```
ollama run Llama-3.2-3B-Instruct-abliterated-q4_K_M
```
The running architecture is llama.
## Evaluations
The following data has been re-evaluated; the value reported for each test is the average.
| Benchmark | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct-abliterated |
|-------------|-----------------------|-----------------------------------|
| IF_Eval | 76.55 | **76.76** |
| MMLU Pro | 27.88 | **28.00** |
| TruthfulQA | 50.55 | **50.73** |
| BBH | 41.81 | **41.86** |
| GPQA | 28.39 | **28.41** |
The script used for evaluation can be found inside this repository under /eval.sh, or click [here](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated/blob/main/eval.sh)
|
nhung03/ddf81ae0-fd56-49a8-a0be-da1168aca772 | nhung03 | 2025-01-10T18:27:58Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T17:16:43Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ddf81ae0-fd56-49a8-a0be-da1168aca772
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2a3a0d6fea0609c5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2a3a0d6fea0609c5_train_data.json
type:
field_input: knowledge
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/ddf81ae0-fd56-49a8-a0be-da1168aca772
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2a3a0d6fea0609c5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7185e2c9-93a6-4519-a3b9-075c4614c1de
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7185e2c9-93a6-4519-a3b9-075c4614c1de
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ddf81ae0-fd56-49a8-a0be-da1168aca772
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0996 | 0.0175 | 200 | 0.8120 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MinaMila/GermanCredit_ExtEval_Mistral_Base_10ep | MinaMila | 2025-01-10T18:27:46Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:finetune:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T18:25:04Z | ---
base_model: unsloth/mistral-7b-v0.3
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
adammandic87/8801d395-e3b3-4c16-8abf-1f8e88c2c2ec | adammandic87 | 2025-01-10T18:26:01Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T18:18:37Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8801d395-e3b3-4c16-8abf-1f8e88c2c2ec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 67ca02675240a51f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/67ca02675240a51f_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/8801d395-e3b3-4c16-8abf-1f8e88c2c2ec
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/67ca02675240a51f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 293589b7-5982-4d1d-bd8c-e478d3dfd31e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 293589b7-5982-4d1d-bd8c-e478d3dfd31e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8801d395-e3b3-4c16-8abf-1f8e88c2c2ec
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 3 | nan |
| 0.0 | 0.0007 | 6 | nan |
| 0.0 | 0.0010 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/KorMedQwen-GGUF | mradermacher | 2025-01-10T18:24:33Z | 324 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:minsup562/KorMedQwen",
"base_model:quantized:minsup562/KorMedQwen",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T16:22:55Z | ---
base_model: minsup562/KorMedQwen
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/minsup562/KorMedQwen
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q3_K_S.gguf) | Q3_K_S | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/KorMedQwen-GGUF/resolve/main/KorMedQwen.f16.gguf) | f16 | 15.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
GuillaumeG50/amelie | GuillaumeG50 | 2025-01-10T18:23:58Z | 45 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T17:41:32Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: amelie
---
# Amelie
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `amelie` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('GuillaumeG50/amelie', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
akshat57/custom-gpt2-model2 | akshat57 | 2025-01-10T18:22:51Z | 62 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T18:20:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kostiantynk/bf7036a8-59b1-4e0b-8311-46f1609105ab | kostiantynk | 2025-01-10T18:21:14Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"region:us"
] | null | 2025-01-10T18:03:42Z | ---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bf7036a8-59b1-4e0b-8311-46f1609105ab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3ad533406285a0ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3ad533406285a0ea_train_data.json
type:
field_instruction: dialogue
field_output: reference
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/bf7036a8-59b1-4e0b-8311-46f1609105ab
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/3ad533406285a0ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bf7036a8-59b1-4e0b-8311-46f1609105ab
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8854
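The card does not include a usage snippet, so here is a minimal, hedged sketch of loading this LoRA adapter on top of the base model with Transformers and PEFT (model IDs are taken from the config above; the prompt and generation settings are purely illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "heegyu/WizardVicuna2-13b-hf"
adapter_id = "kostiantynk/bf7036a8-59b1-4e0b-8311-46f1609105ab"

# Load the base model, then attach the LoRA adapter trained with the config above
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# The config formats prompts as the raw `dialogue` field, so pass a dialogue transcript
prompt = "A: Are we still meeting at noon?\nB: Yes, see you at the cafe."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```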
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7899 | 0.0003 | 1 | 0.9394 |
| 1.1068 | 0.0009 | 3 | 0.9390 |
| 0.9386 | 0.0017 | 6 | 0.9298 |
| 0.6346 | 0.0026 | 9 | 0.8854 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk1205/e0d9ad20-2f5f-46eb-b6c9-2bc106ee3189 | kostiantynk1205 | 2025-01-10T18:20:21Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"region:us"
] | null | 2025-01-10T18:02:01Z | ---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e0d9ad20-2f5f-46eb-b6c9-2bc106ee3189
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3ad533406285a0ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3ad533406285a0ea_train_data.json
type:
field_instruction: dialogue
field_output: reference
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/e0d9ad20-2f5f-46eb-b6c9-2bc106ee3189
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/3ad533406285a0ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e0d9ad20-2f5f-46eb-b6c9-2bc106ee3189
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7899 | 0.0003 | 1 | 0.9394 |
| 1.1066 | 0.0009 | 3 | 0.9390 |
| 0.9387 | 0.0017 | 6 | 0.9302 |
| 0.6352 | 0.0026 | 9 | 0.8864 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
FatCat87/03b1c68f-da63-4809-b491-a3f5179f6cd3 | FatCat87 | 2025-01-10T18:19:46Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:adapter:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T18:11:33Z | ---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: unsloth/Qwen2.5-0.5B
model-index:
- name: 03b1c68f-da63-4809-b491-a3f5179f6cd3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B
bf16: auto
datasets:
- data_files:
- 8410094a2dce200f_train_data.json
ds_type: json
format: custom
path: 8410094a2dce200f_train_data.json
type:
field: null
field_input: null
field_instruction: instructions
field_output: target_responses
field_system: null
format: null
no_input_format: null
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_sample_packing: false
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: FatCat87/03b1c68f-da63-4809-b491-a3f5179f6cd3
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: ./outputs/out
pad_to_sequence_len: true
resume_from_checkpoint: null
sample_packing: true
saves_per_epoch: 1
seed: 701
sequence_len: 4096
special_tokens: null
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
val_set_size: 0.1
wandb_entity: fatcat87-taopanda
wandb_log_model: null
wandb_mode: online
wandb_name: 03b1c68f-da63-4809-b491-a3f5179f6cd3
wandb_project: subnet56
wandb_runid: 03b1c68f-da63-4809-b491-a3f5179f6cd3
wandb_watch: null
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/fatcat87-taopanda/subnet56/runs/zupghk90)
# 03b1c68f-da63-4809-b491-a3f5179f6cd3
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4450
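The adapter was trained with the base model loaded in 8-bit (`load_in_8bit: true` in the config above), so one reasonable way to run it is to reload the base with the same quantization and attach the adapter. A sketch, not an official snippet; the prompt is illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/Qwen2.5-0.5B"
adapter_id = "FatCat87/03b1c68f-da63-4809-b491-a3f5179f6cd3"

# Reload the base in 8-bit to match the training setup, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Write one sentence about the weather.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```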
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 701
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: AdamW (8-bit, via bitsandbytes; `adamw_bnb_8bit` in the config above) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1014 | 0.0952 | 1 | 1.9755 |
| 1.878 | 0.2857 | 3 | 1.7412 |
| 1.5662 | 0.5714 | 6 | 1.5236 |
| 1.5404 | 0.8571 | 9 | 1.4450 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Omartificial-Intelligence-Space/Marbert-all-nli-triplet-Matryoshka | Omartificial-Intelligence-Space | 2025-01-10T18:14:19Z | 407 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"mteb",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"arxiv:2407.21139",
"base_model:UBC-NLP/MARBERTv2",
"base_model:finetune:UBC-NLP/MARBERTv2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-06-17T11:23:10Z | ---
language:
- ar
library_name: sentence-transformers
tags:
- mteb
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: UBC-NLP/MARBERTv2
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط النظيفة
sentences:
- رجل يقدم عرضاً
- هناك رجل بالخارج قرب الشاطئ
- رجل يجلس على أريكه
- source_sentence: رجل يقفز إلى سريره القذر
sentences:
- السرير قذر.
- رجل يضحك أثناء غسيل الملابس
- الرجل على القمر
- source_sentence: الفتيات بالخارج
sentences:
- امرأة تلف الخيط إلى كرات بجانب كومة من الكرات
- فتيان يركبان في جولة متعة
- >-
ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة
تتحدث إليهن
- source_sentence: الرجل يرتدي قميصاً أزرق.
sentences:
- >-
رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة
حمراء مع الماء في الخلفية.
- كتاب القصص مفتوح
- رجل يرتدي قميص أسود يعزف على الجيتار.
- source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.
sentences:
- ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه
- رجل يستلقي على وجهه على مقعد في الحديقة.
- الشاب نائم بينما الأم تقود ابنتها إلى الحديقة
pipeline_tag: sentence-similarity
model-index:
- name: Omartificial-Intelligence-Space/Marbert-all-nli-triplet-Matryoshka
results:
- dataset:
config: ar
name: MTEB MintakaRetrieval (ar)
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
split: test
type: mintaka/mmteb-mintaka
metrics:
- type: main_score
value: 16.058
- type: map_at_1
value: 8.398
- type: map_at_3
value: 11.681
- type: map_at_5
value: 12.616
- type: map_at_10
value: 13.281
- type: ndcg_at_1
value: 8.398
- type: ndcg_at_3
value: 12.75
- type: ndcg_at_5
value: 14.453
- type: ndcg_at_10
value: 16.058
- type: recall_at_1
value: 8.398
- type: recall_at_3
value: 15.842
- type: recall_at_5
value: 20.018
- type: recall_at_10
value: 24.966
- type: precision_at_1
value: 8.398
- type: precision_at_3
value: 5.281
- type: precision_at_5
value: 4.004
- type: precision_at_10
value: 2.497
- type: mrr_at_1
value: 8.3976
- type: mrr_at_3
value: 11.681
- type: mrr_at_5
value: 12.6161
- type: mrr_at_10
value: 13.2812
task:
type: Retrieval
- dataset:
config: ar
name: MTEB MIRACLRetrievalHardNegatives (ar)
revision: 95c8db7d4a6e9c1d8a60601afd63d553ae20a2eb
split: dev
type: miracl/mmteb-miracl-hardnegatives
metrics:
- type: main_score
value: 15.853
- type: map_at_1
value: 5.867
- type: map_at_3
value: 9.003
- type: map_at_5
value: 10.068
- type: map_at_10
value: 11.294
- type: ndcg_at_1
value: 9.0
- type: ndcg_at_3
value: 11.363
- type: ndcg_at_5
value: 12.986
- type: ndcg_at_10
value: 15.853
- type: recall_at_1
value: 5.867
- type: recall_at_3
value: 12.639
- type: recall_at_5
value: 16.649
- type: recall_at_10
value: 24.422
- type: precision_at_1
value: 9.0
- type: precision_at_3
value: 7.1
- type: precision_at_5
value: 5.82
- type: precision_at_10
value: 4.38
- type: mrr_at_1
value: 9.0
- type: mrr_at_3
value: 13.4667
- type: mrr_at_5
value: 14.6367
- type: mrr_at_10
value: 16.0177
task:
type: Retrieval
- dataset:
config: ar
name: MTEB MLQARetrieval (ar)
revision: 397ed406c1a7902140303e7faf60fff35b58d285
split: validation
type: mlqa/mmteb-mlqa
metrics:
- type: main_score
value: 58.919
- type: map_at_1
value: 44.874
- type: map_at_3
value: 51.902
- type: map_at_5
value: 53.198
- type: map_at_10
value: 54.181
- type: ndcg_at_1
value: 44.874
- type: ndcg_at_3
value: 54.218
- type: ndcg_at_5
value: 56.541
- type: ndcg_at_10
value: 58.919
- type: recall_at_1
value: 44.874
- type: recall_at_3
value: 60.928
- type: recall_at_5
value: 66.538
- type: recall_at_10
value: 73.888
- type: precision_at_1
value: 44.874
- type: precision_at_3
value: 20.309
- type: precision_at_5
value: 13.308
- type: precision_at_10
value: 7.389
- type: mrr_at_1
value: 44.8743
- type: mrr_at_3
value: 51.902
- type: mrr_at_5
value: 53.1979
- type: mrr_at_10
value: 54.1809
task:
type: Retrieval
- dataset:
config: default
name: MTEB SadeemQuestionRetrieval (ar)
revision: 3cb0752b182e5d5d740df547748b06663c8e0bd9
split: test
type: sadeem/mmteb-sadeem
metrics:
- type: main_score
value: 57.068
- type: map_at_1
value: 24.414
- type: map_at_3
value: 45.333
- type: map_at_5
value: 46.695
- type: map_at_10
value: 47.429
- type: ndcg_at_1
value: 24.414
- type: ndcg_at_3
value: 52.828
- type: ndcg_at_5
value: 55.288
- type: ndcg_at_10
value: 57.068
- type: recall_at_1
value: 24.414
- type: recall_at_3
value: 74.725
- type: recall_at_5
value: 80.708
- type: recall_at_10
value: 86.213
- type: precision_at_1
value: 24.414
- type: precision_at_3
value: 24.908
- type: precision_at_5
value: 16.142
- type: precision_at_10
value: 8.621
- type: mrr_at_1
value: 25.2753
- type: mrr_at_3
value: 45.58
- type: mrr_at_5
value: 46.8581
- type: mrr_at_10
value: 47.6414
task:
type: Retrieval
- dataset:
config: default
name: MTEB BIOSSES (default)
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cosine_pearson
value: 49.25240527202211
- type: cosine_spearman
value: 51.87708566904703
- type: euclidean_pearson
value: 49.790877425774696
- type: euclidean_spearman
value: 51.725274981021855
- type: main_score
value: 51.87708566904703
- type: manhattan_pearson
value: 52.31560776967401
- type: manhattan_spearman
value: 54.28979124658997
task:
type: STS
- dataset:
config: default
name: MTEB SICK-R (default)
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: cosine_pearson
value: 65.81089479351829
- type: cosine_spearman
value: 65.80163441928238
- type: euclidean_pearson
value: 65.2718874370746
- type: euclidean_spearman
value: 65.92429031695988
- type: main_score
value: 65.80163441928238
- type: manhattan_pearson
value: 65.28701419332383
- type: manhattan_spearman
value: 65.94229793651319
task:
type: STS
- dataset:
config: default
name: MTEB STS12 (default)
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cosine_pearson
value: 65.11346939995998
- type: cosine_spearman
value: 63.00297824477175
- type: euclidean_pearson
value: 63.85320097970942
- type: euclidean_spearman
value: 63.25151047701848
- type: main_score
value: 63.00297824477175
- type: manhattan_pearson
value: 64.40291990853984
- type: manhattan_spearman
value: 63.63497232399945
task:
type: STS
- dataset:
config: default
name: MTEB STS13 (default)
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cosine_pearson
value: 52.2735823521702
- type: cosine_spearman
value: 52.23198766098021
- type: euclidean_pearson
value: 54.12467577456837
- type: euclidean_spearman
value: 52.40014028261351
- type: main_score
value: 52.23198766098021
- type: manhattan_pearson
value: 54.38052509834607
- type: manhattan_spearman
value: 52.70836595958237
task:
type: STS
- dataset:
config: default
name: MTEB STS14 (default)
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cosine_pearson
value: 58.55307076840419
- type: cosine_spearman
value: 59.2261024017655
- type: euclidean_pearson
value: 59.55734715751804
- type: euclidean_spearman
value: 60.135899681574834
- type: main_score
value: 59.2261024017655
- type: manhattan_pearson
value: 59.99274396356966
- type: manhattan_spearman
value: 60.44325356503041
task:
type: STS
- dataset:
config: default
name: MTEB STS15 (default)
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cosine_pearson
value: 68.94418532602707
- type: cosine_spearman
value: 70.01912156519296
- type: euclidean_pearson
value: 71.67028435860581
- type: euclidean_spearman
value: 71.48252471922122
- type: main_score
value: 70.01912156519296
- type: manhattan_pearson
value: 71.9587452337792
- type: manhattan_spearman
value: 71.69160519065173
task:
type: STS
- dataset:
config: default
name: MTEB STS16 (default)
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cosine_pearson
value: 62.81619254162203
- type: cosine_spearman
value: 64.98814526698425
- type: euclidean_pearson
value: 66.43531796610995
- type: euclidean_spearman
value: 66.53768451143964
- type: main_score
value: 64.98814526698425
- type: manhattan_pearson
value: 66.57822125651369
- type: manhattan_spearman
value: 66.71830390508079
task:
type: STS
- dataset:
config: ar-ar
name: MTEB STS17 (ar-ar)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 81.68055610903552
- type: cosine_spearman
value: 82.18125783448961
- type: euclidean_pearson
value: 80.5422740473486
- type: euclidean_spearman
value: 81.79456727036232
- type: main_score
value: 82.18125783448961
- type: manhattan_pearson
value: 80.43564733654793
- type: manhattan_spearman
value: 81.76103816207625
task:
type: STS
- dataset:
config: ar
name: MTEB STS22 (ar)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 51.33460593849487
- type: cosine_spearman
value: 58.07741072443786
- type: euclidean_pearson
value: 54.26430308336828
- type: euclidean_spearman
value: 58.8384539429318
- type: main_score
value: 58.07741072443786
- type: manhattan_pearson
value: 54.41587176266624
- type: manhattan_spearman
value: 58.831993325957086
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark (default)
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 61.11956207522431
- type: cosine_spearman
value: 61.16768766134144
- type: euclidean_pearson
value: 64.44141934993837
- type: euclidean_spearman
value: 63.450379593077066
- type: main_score
value: 61.16768766134144
- type: manhattan_pearson
value: 64.43852352892529
- type: manhattan_spearman
value: 63.57630045107761
task:
type: STS
- dataset:
config: default
name: MTEB SummEval (default)
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cosine_pearson
value: 29.583566160417668
- type: cosine_spearman
value: 29.534419950502212
- type: dot_pearson
value: 28.13970643170574
- type: dot_spearman
value: 28.907762267009073
- type: main_score
value: 29.534419950502212
- type: pearson
value: 29.583566160417668
- type: spearman
value: 29.534419950502212
task:
type: Summarization
- name: SentenceTransformer based on UBC-NLP/MARBERTv2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.611168498883907
name: Pearson Cosine
- type: spearman_cosine
value: 0.6116733587939157
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6443687886661206
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6358107360369792
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.644404066642609
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6345893921062774
name: Spearman Euclidean
- type: pearson_dot
value: 0.4723643245352202
name: Pearson Dot
- type: spearman_dot
value: 0.44844519905410135
name: Spearman Dot
- type: pearson_max
value: 0.644404066642609
name: Pearson Max
- type: spearman_max
value: 0.6358107360369792
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.6664570291720014
name: Pearson Cosine
- type: spearman_cosine
value: 0.6647687532159875
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6429976947418544
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6334753432753939
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6466249455585532
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6373181315122213
name: Spearman Euclidean
- type: pearson_dot
value: 0.5370129457359227
name: Pearson Dot
- type: spearman_dot
value: 0.5241649973373772
name: Spearman Dot
- type: pearson_max
value: 0.6664570291720014
name: Pearson Max
- type: spearman_max
value: 0.6647687532159875
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.6601248277308522
name: Pearson Cosine
- type: spearman_cosine
value: 0.6592739654246011
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6361644543165994
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6250621947417249
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6408426652431157
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6300109524350457
name: Spearman Euclidean
- type: pearson_dot
value: 0.5250513197384045
name: Pearson Dot
- type: spearman_dot
value: 0.5154779060125071
name: Spearman Dot
- type: pearson_max
value: 0.6601248277308522
name: Pearson Max
- type: spearman_max
value: 0.6592739654246011
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.6549481034721005
name: Pearson Cosine
- type: spearman_cosine
value: 0.6523201621940143
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6342700090917214
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6226791710099966
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6397224689512541
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6280973341704362
name: Spearman Euclidean
- type: pearson_dot
value: 0.47240889358810917
name: Pearson Dot
- type: spearman_dot
value: 0.4633669926372942
name: Spearman Dot
- type: pearson_max
value: 0.6549481034721005
name: Pearson Max
- type: spearman_max
value: 0.6523201621940143
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.6367217585211098
name: Pearson Cosine
- type: spearman_cosine
value: 0.6370191671711296
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6263730801254332
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6118927366012856
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6327699647617465
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6180184829867724
name: Spearman Euclidean
- type: pearson_dot
value: 0.41169381399943167
name: Pearson Dot
- type: spearman_dot
value: 0.40444222536491986
name: Spearman Dot
- type: pearson_max
value: 0.6367217585211098
name: Pearson Max
- type: spearman_max
value: 0.6370191671711296
name: Spearman Max
license: apache-2.0
---
# SentenceTransformer based on UBC-NLP/MARBERTv2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [UBC-NLP/MARBERTv2](https://huggingface.co/UBC-NLP/MARBERTv2) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [UBC-NLP/MARBERTv2](https://huggingface.co/UBC-NLP/MARBERTv2) <!-- at revision fe88db9db8ccdb0c4e1627495f405c44a5f89066 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Marbert-all-nli-triplet-Matryoshka")
# Run inference
sentences = [
'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6112 |
| **spearman_cosine** | **0.6117** |
| pearson_manhattan | 0.6444 |
| spearman_manhattan | 0.6358 |
| pearson_euclidean | 0.6444 |
| spearman_euclidean | 0.6346 |
| pearson_dot | 0.4724 |
| spearman_dot | 0.4484 |
| pearson_max | 0.6444 |
| spearman_max | 0.6358 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6665 |
| **spearman_cosine** | **0.6648** |
| pearson_manhattan | 0.643 |
| spearman_manhattan | 0.6335 |
| pearson_euclidean | 0.6466 |
| spearman_euclidean | 0.6373 |
| pearson_dot | 0.537 |
| spearman_dot | 0.5242 |
| pearson_max | 0.6665 |
| spearman_max | 0.6648 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6601 |
| **spearman_cosine** | **0.6593** |
| pearson_manhattan | 0.6362 |
| spearman_manhattan | 0.6251 |
| pearson_euclidean | 0.6408 |
| spearman_euclidean | 0.63 |
| pearson_dot | 0.5251 |
| spearman_dot | 0.5155 |
| pearson_max | 0.6601 |
| spearman_max | 0.6593 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6549 |
| **spearman_cosine** | **0.6523** |
| pearson_manhattan | 0.6343 |
| spearman_manhattan | 0.6227 |
| pearson_euclidean | 0.6397 |
| spearman_euclidean | 0.6281 |
| pearson_dot | 0.4724 |
| spearman_dot | 0.4634 |
| pearson_max | 0.6549 |
| spearman_max | 0.6523 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.6367 |
| **spearman_cosine** | **0.637** |
| pearson_manhattan | 0.6264 |
| spearman_manhattan | 0.6119 |
| pearson_euclidean | 0.6328 |
| spearman_euclidean | 0.618 |
| pearson_dot | 0.4117 |
| spearman_dot | 0.4044 |
| pearson_max | 0.6367 |
| spearman_max | 0.637 |
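The tables above were produced with the `EmbeddingSimilarityEvaluator` linked in each subsection. A minimal sketch of running the same kind of evaluation on your own Arabic STS-style pairs (the sentence pairs and gold scores below are placeholders, not the actual test data):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Omartificial-Intelligence-Space/Marbert-all-nli-triplet-Matryoshka")

# Parallel lists: two sentences per pair plus a gold similarity score in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["رجل يعزف على الجيتار", "امرأة تقرأ كتاباً"],
    sentences2=["شخص يعزف على آلة موسيقية", "رجل يطبخ العشاء"],
    scores=[0.9, 0.1],
    name="sts-sample",
)
print(evaluator(model))  # correlation of the model's similarities with the gold scores
```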
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 7.68 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.66 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.47 tokens</li><li>max: 40 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> |
| <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> |
| <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
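Because the loss is weighted across the 768/512/256/128/64 dimensions listed above, embeddings can be truncated to a smaller Matryoshka dimension with only a modest quality drop (see the per-dimension results in the Evaluation section). A sketch of using truncated embeddings; `truncate_dim` assumes the sentence-transformers version listed under Framework Versions below (3.0.1):
```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 Matryoshka dimensions of each embedding
model = SentenceTransformer(
    "Omartificial-Intelligence-Space/Marbert-all-nli-triplet-Matryoshka",
    truncate_dim=256,
)

embeddings = model.encode(["الطقس جميل اليوم", "السماء صافية"])
print(embeddings.shape)  # (2, 256)
print(model.similarity(embeddings, embeddings))
```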
### Evaluation Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 14.78 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.41 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.95 tokens</li><li>max: 21 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> |
| <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> |
| <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | <code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
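Putting the hyperparameters above together, here is a hedged sketch of a comparable run with the sentence-transformers v3 trainer API; the dataset id comes from the card metadata, but the split name and column handling are assumptions to check against the dataset card:
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Base checkpoint (mean pooling is added automatically for a plain BERT model)
model = SentenceTransformer("UBC-NLP/MARBERTv2")

# Anchor/positive/negative triplets; the split name is an assumption
train_dataset = load_dataset("Omartificial-Intelligence-Space/Arabic-NLi-Triplet", split="train")

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="marbert-nli-matryoshka",
    num_train_epochs=1,
    per_device_train_batch_size=64,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss).train()
```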
### Training Logs
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| 0.0229 | 200 | 25.0771 | - | - | - | - | - |
| 0.0459 | 400 | 9.1435 | - | - | - | - | - |
| 0.0688 | 600 | 8.0492 | - | - | - | - | - |
| 0.0918 | 800 | 7.1378 | - | - | - | - | - |
| 0.1147 | 1000 | 7.6249 | - | - | - | - | - |
| 0.1377 | 1200 | 7.3604 | - | - | - | - | - |
| 0.1606 | 1400 | 6.5783 | - | - | - | - | - |
| 0.1835 | 1600 | 6.4145 | - | - | - | - | - |
| 0.2065 | 1800 | 6.1781 | - | - | - | - | - |
| 0.2294 | 2000 | 6.2375 | - | - | - | - | - |
| 0.2524 | 2200 | 6.2587 | - | - | - | - | - |
| 0.2753 | 2400 | 6.0826 | - | - | - | - | - |
| 0.2983 | 2600 | 6.1514 | - | - | - | - | - |
| 0.3212 | 2800 | 5.6949 | - | - | - | - | - |
| 0.3442 | 3000 | 6.0062 | - | - | - | - | - |
| 0.3671 | 3200 | 5.7551 | - | - | - | - | - |
| 0.3900 | 3400 | 5.658 | - | - | - | - | - |
| 0.4130 | 3600 | 5.7135 | - | - | - | - | - |
| 0.4359 | 3800 | 5.3909 | - | - | - | - | - |
| 0.4589 | 4000 | 5.5068 | - | - | - | - | - |
| 0.4818 | 4200 | 5.2261 | - | - | - | - | - |
| 0.5048 | 4400 | 5.1674 | - | - | - | - | - |
| 0.5277 | 4600 | 5.0427 | - | - | - | - | - |
| 0.5506 | 4800 | 5.3824 | - | - | - | - | - |
| 0.5736 | 5000 | 5.3063 | - | - | - | - | - |
| 0.5965 | 5200 | 5.2174 | - | - | - | - | - |
| 0.6195 | 5400 | 5.2116 | - | - | - | - | - |
| 0.6424 | 5600 | 5.2226 | - | - | - | - | - |
| 0.6654 | 5800 | 5.2051 | - | - | - | - | - |
| 0.6883 | 6000 | 5.204 | - | - | - | - | - |
| 0.7113 | 6200 | 5.154 | - | - | - | - | - |
| 0.7342 | 6400 | 5.0236 | - | - | - | - | - |
| 0.7571 | 6600 | 4.9476 | - | - | - | - | - |
| 0.7801 | 6800 | 4.0164 | - | - | - | - | - |
| 0.8030 | 7000 | 3.5707 | - | - | - | - | - |
| 0.8260 | 7200 | 3.3586 | - | - | - | - | - |
| 0.8489 | 7400 | 3.2376 | - | - | - | - | - |
| 0.8719 | 7600 | 3.0282 | - | - | - | - | - |
| 0.8948 | 7800 | 2.901 | - | - | - | - | - |
| 0.9177 | 8000 | 2.9371 | - | - | - | - | - |
| 0.9407 | 8200 | 2.8362 | - | - | - | - | - |
| 0.9636 | 8400 | 2.8121 | - | - | - | - | - |
| 0.9866 | 8600 | 2.7105 | - | - | - | - | - |
| 1.0 | 8717 | - | 0.6523 | 0.6593 | 0.6648 | 0.6370 | 0.6117 |
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## <span style="color:blue">Acknowledgments</span>
The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.
## Citation
If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:
```bibtex
@misc{nacar2024enhancingsemanticsimilarityunderstanding,
      title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning},
      author={Omer Nacar and Anis Koubaa},
      year={2024},
      eprint={2407.21139},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.21139},
}
```
|
VaidikML0508/dqn-SpaceInvadersNoFrameskip-v4 | VaidikML0508 | 2025-01-10T18:13:48Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-01-10T18:13:07Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.50 +/- 233.72
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VaidikML0508 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VaidikML0508 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga VaidikML0508
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
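Outside the RL Zoo scripts, the checkpoint can also be loaded directly with stable-baselines3 via `huggingface_sb3`. A sketch; the file name is assumed from the usual RL Zoo naming convention, and the SB3 Atari dependencies must be installed:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Download the checkpoint (file name assumed from the RL Zoo convention)
checkpoint = load_from_hub(
    repo_id="VaidikML0508/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: AtariWrapper plus 4-frame stacking
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

obs = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```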
|
kokovova/d2ace4ce-caf2-4710-bec6-778c4b391944 | kokovova | 2025-01-10T18:12:08Z | 10 | 0 | peft | [
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"region:us"
] | null | 2025-01-10T18:11:47Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d2ace4ce-caf2-4710-bec6-778c4b391944
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8386a911ff2e26e3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8386a911ff2e26e3_train_data.json
type:
field_input: keyphrases
field_instruction: title
field_output: abstract
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kokovova/d2ace4ce-caf2-4710-bec6-778c4b391944
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/8386a911ff2e26e3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7ce4de99-20ce-46b1-beb1-963ceb5dedd8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7ce4de99-20ce-46b1-beb1-963ceb5dedd8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d2ace4ce-caf2-4710-bec6-778c4b391944
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0044 | 1 | 10.8436 |
| 10.8412 | 0.0354 | 8 | 10.8383 |
| 10.829 | 0.0709 | 16 | 10.8216 |
| 10.8132 | 0.1063 | 24 | 10.8094 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
filipesantoscv11/e9f22ee7-dfae-4809-90da-c3efea29dda8 | filipesantoscv11 | 2025-01-10T18:11:37Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"region:us"
] | null | 2025-01-10T17:51:41Z | ---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e9f22ee7-dfae-4809-90da-c3efea29dda8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3ad533406285a0ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3ad533406285a0ea_train_data.json
type:
field_instruction: dialogue
field_output: reference
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: filipesantoscv11/e9f22ee7-dfae-4809-90da-c3efea29dda8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/3ad533406285a0ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e9f22ee7-dfae-4809-90da-c3efea29dda8
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 0.9095 |
| 0.9153 | 0.0023 | 8 | 0.8859 |
| 0.7232 | 0.0046 | 16 | 0.8223 |
| 0.8235 | 0.0069 | 24 | 0.8008 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
leondotle/Gemma-2-2B-it-bnb-func-call | leondotle | 2025-01-10T18:07:59Z | 45 | 0 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-2b-it-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-10T18:06:46Z | ---
base_model: unsloth/gemma-2-2b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** leondotle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-it-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
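Since the repository is tagged `gguf`, a hedged sketch of running the quantized file with llama-cpp-python is shown below; the filename pattern is hypothetical and should be replaced with the actual GGUF file in the repo.
```python
# Minimal sketch (assumption: the repo ships a GGUF file; the glob below is a hypothetical placeholder).
from llama_cpp import Llama
llm = Llama.from_pretrained(
    repo_id="leondotle/Gemma-2-2B-it-bnb-func-call",
    filename="*.gguf",  # replace with the concrete GGUF filename from the repository
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Which function should be called to look up the weather in Paris?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```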
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cunghoctienganh/b26876a9-f446-4de6-9dce-2b4cba2ee973 | cunghoctienganh | 2025-01-10T18:06:17Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T17:31:39Z | ---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b26876a9-f446-4de6-9dce-2b4cba2ee973
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5f8782025623cf05_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5f8782025623cf05_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/b26876a9-f446-4de6-9dce-2b4cba2ee973
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5f8782025623cf05_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0eaa2168-d144-4655-b824-847b031556c1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0eaa2168-d144-4655-b824-847b031556c1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b26876a9-f446-4de6-9dce-2b4cba2ee973
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.2058 | 0.0367 | 200 | 1.2521 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PrunaAI/hrasto-llamas2_tok_blimp-bnb-8bit-smashed | PrunaAI | 2025-01-10T18:05:40Z | 7 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:hrasto/llamas2_tok_blimp",
"base_model:quantized:hrasto/llamas2_tok_blimp",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T18:05:32Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: hrasto/llamas2_tok_blimp
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8; a generic sketch of how llm-int8 loading is typically configured is shown after this FAQ list.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
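For orientation only (this is not the exact Pruna smashing pipeline, whose configuration lives in `model/smash_config.json`), llm-int8 quantization is typically applied in transformers via bitsandbytes roughly like this:
```python
# Illustrative, generic llm-int8 loading with bitsandbytes; NOT the Pruna-internal procedure.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quant_config = BitsAndBytesConfig(load_in_8bit=True)  # llm-int8 weight quantization
model = AutoModelForCausalLM.from_pretrained(
    "hrasto/llamas2_tok_blimp",  # the original base model named in this card
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("hrasto/llamas2_tok_blimp")
```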
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo hrasto/llamas2_tok_blimp are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the smashed (8-bit) checkpoint and the tokenizer of the original base model.
model = AutoModelForCausalLM.from_pretrained("PrunaAI/hrasto-llamas2_tok_blimp-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("hrasto/llamas2_tok_blimp")
# Tokenize a prompt, generate a completion, and decode it.
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, hrasto/llamas2_tok_blimp, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
chauhoang/6483b953-68bc-21e5-870f-df3e47170d3b | chauhoang | 2025-01-10T18:03:41Z | 23 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"base_model:adapter:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-10T16:47:30Z | ---
library_name: peft
license: llama3.1
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6483b953-68bc-21e5-870f-df3e47170d3b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 99abecc3940692d4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/99abecc3940692d4_train_data.json
type:
field_instruction: title
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/6483b953-68bc-21e5-870f-df3e47170d3b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/99abecc3940692d4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9a614f62-ffe2-40a4-9a16-ba60a4e21ae0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9a614f62-ffe2-40a4-9a16-ba60a4e21ae0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6483b953-68bc-21e5-870f-df3e47170d3b
This model is a fine-tuned version of [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.1410 |
| 2.0296 | 0.0009 | 10 | 2.1128 |
| 2.0385 | 0.0019 | 20 | 2.0514 |
| 1.9569 | 0.0028 | 30 | 2.0336 |
| 2.0647 | 0.0038 | 40 | 2.0265 |
| 2.0398 | 0.0047 | 50 | 2.0251 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rsicproject/googlenet-GPT-SYDNEY-captioning | rsicproject | 2025-01-10T18:03:12Z | 40 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T18:01:39Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: googlenet-GPT-SYDNEY-captioning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# googlenet-GPT-SYDNEY-captioning
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1141
- Rouge: 0.6996
- Bleu1: 0.7882
- Bleu2: 0.7020
- Bleu3: 0.6271
- Bleu4: 0.5653
- Meteor: 0.7048
## Model description
More information needed
## Intended uses & limitations
More information needed
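No usage details are given in the card; as the tags mark this as a vision-encoder-decoder captioning model, a hedged sketch is shown below, assuming the repository bundles a compatible image processor and tokenizer (if it does not, supply them separately). The image path is illustrative.
```python
# Minimal sketch (not from the original card): caption an image with this vision-encoder-decoder model.
# Assumption: the repo ships a compatible image processor and tokenizer; adjust if it does not.
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
repo = "rsicproject/googlenet-GPT-SYDNEY-captioning"
model = VisionEncoderDecoderModel.from_pretrained(repo)
processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)
image = Image.open("scene.jpg").convert("RGB")  # illustrative path to an input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
caption_ids = model.generate(pixel_values, max_new_tokens=32)
print(tokenizer.decode(caption_ids[0], skip_special_tokens=True))
```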
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1024
- num_epochs: 128
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:------:|:------:|
| No log | 1.0 | 39 | 4.2302 | 0.1767 | 0.0547 | 0.0159 | 0.0027 | 0.0012 | 0.0708 |
| No log | 2.0 | 78 | 3.4015 | 0.1853 | 0.2079 | 0.1309 | 0.0631 | 0.0324 | 0.1342 |
| No log | 3.0 | 117 | 1.9854 | 0.5961 | 0.6924 | 0.5863 | 0.5014 | 0.4347 | 0.5797 |
| No log | 4.0 | 156 | 1.2550 | 0.5721 | 0.6471 | 0.5501 | 0.4709 | 0.4059 | 0.5486 |
| No log | 5.0 | 195 | 1.0187 | 0.6471 | 0.7372 | 0.6508 | 0.5696 | 0.5012 | 0.6614 |
| No log | 6.0 | 234 | 0.9720 | 0.6686 | 0.7554 | 0.6735 | 0.5875 | 0.5103 | 0.6650 |
| No log | 7.0 | 273 | 0.8957 | 0.7045 | 0.8067 | 0.7307 | 0.6477 | 0.5699 | 0.7290 |
| No log | 8.0 | 312 | 0.9155 | 0.6849 | 0.7708 | 0.6880 | 0.6054 | 0.5296 | 0.7100 |
| No log | 9.0 | 351 | 0.9240 | 0.6719 | 0.7735 | 0.6803 | 0.5997 | 0.5294 | 0.6999 |
| No log | 10.0 | 390 | 0.9262 | 0.6531 | 0.7368 | 0.6275 | 0.5363 | 0.4562 | 0.6716 |
| No log | 11.0 | 429 | 0.9618 | 0.6657 | 0.7621 | 0.6681 | 0.5843 | 0.5115 | 0.6982 |
| No log | 12.0 | 468 | 0.9606 | 0.6729 | 0.7597 | 0.6747 | 0.5885 | 0.5070 | 0.6929 |
| No log | 13.0 | 507 | 0.9388 | 0.6419 | 0.7546 | 0.6560 | 0.5711 | 0.4995 | 0.6527 |
| No log | 14.0 | 546 | 0.9249 | 0.6990 | 0.8106 | 0.7392 | 0.6698 | 0.6056 | 0.7427 |
| No log | 15.0 | 585 | 0.9734 | 0.7021 | 0.7922 | 0.7045 | 0.6201 | 0.5450 | 0.7276 |
| No log | 16.0 | 624 | 0.9999 | 0.6694 | 0.7763 | 0.6947 | 0.6189 | 0.5518 | 0.6981 |
| No log | 17.0 | 663 | 0.9872 | 0.7085 | 0.7987 | 0.7075 | 0.6152 | 0.5300 | 0.7404 |
| No log | 18.0 | 702 | 1.0115 | 0.6789 | 0.7631 | 0.6547 | 0.5629 | 0.4814 | 0.7081 |
| No log | 19.0 | 741 | 0.9698 | 0.6807 | 0.7974 | 0.7137 | 0.6430 | 0.5787 | 0.7168 |
| No log | 20.0 | 780 | 0.9934 | 0.7274 | 0.8093 | 0.7307 | 0.6541 | 0.5894 | 0.7318 |
| No log | 21.0 | 819 | 1.0731 | 0.6992 | 0.7940 | 0.7061 | 0.6245 | 0.5530 | 0.7243 |
| No log | 22.0 | 858 | 1.0864 | 0.6929 | 0.7738 | 0.6791 | 0.5904 | 0.5082 | 0.7110 |
| No log | 23.0 | 897 | 1.0116 | 0.7170 | 0.8040 | 0.7152 | 0.6336 | 0.5643 | 0.7420 |
| No log | 24.0 | 936 | 1.0736 | 0.7132 | 0.8095 | 0.7356 | 0.6633 | 0.5935 | 0.7449 |
| No log | 25.0 | 975 | 1.0366 | 0.6756 | 0.7602 | 0.6637 | 0.5823 | 0.5108 | 0.7176 |
| No log | 26.0 | 1014 | 1.0312 | 0.6511 | 0.7373 | 0.6430 | 0.5645 | 0.4996 | 0.6633 |
| 0.7581 | 27.0 | 1053 | 1.0871 | 0.6958 | 0.7740 | 0.6776 | 0.5922 | 0.5224 | 0.7086 |
| 0.7581 | 28.0 | 1092 | 1.1145 | 0.6740 | 0.7727 | 0.6754 | 0.5915 | 0.5163 | 0.6978 |
| 0.7581 | 29.0 | 1131 | 1.1446 | 0.6828 | 0.7709 | 0.6845 | 0.5979 | 0.5190 | 0.6974 |
| 0.7581 | 30.0 | 1170 | 1.1141 | 0.6996 | 0.7882 | 0.7020 | 0.6271 | 0.5653 | 0.7048 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.3
|
Omartificial-Intelligence-Space/Arabic-labse-Matryoshka | Omartificial-Intelligence-Space | 2025-01-10T18:03:08Z | 572 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"mteb",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"arxiv:2407.21139",
"base_model:sentence-transformers/LaBSE",
"base_model:finetune:sentence-transformers/LaBSE",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-06-16T20:56:09Z | ---
inference: false
language:
- ar
library_name: sentence-transformers
tags:
- mteb
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/LaBSE
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط النظيفة
sentences:
- رجل يقدم عرضاً
- هناك رجل بالخارج قرب الشاطئ
- رجل يجلس على أريكه
- source_sentence: رجل يقفز إلى سريره القذر
sentences:
- السرير قذر.
- رجل يضحك أثناء غسيل الملابس
- الرجل على القمر
- source_sentence: الفتيات بالخارج
sentences:
- امرأة تلف الخيط إلى كرات بجانب كومة من الكرات
- فتيان يركبان في جولة متعة
- >-
ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة
تتحدث إليهن
- source_sentence: الرجل يرتدي قميصاً أزرق.
sentences:
- >-
رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة
حمراء مع الماء في الخلفية.
- كتاب القصص مفتوح
- رجل يرتدي قميص أسود يعزف على الجيتار.
- source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.
sentences:
- ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه
- رجل يستلقي على وجهه على مقعد في الحديقة.
- الشاب نائم بينما الأم تقود ابنتها إلى الحديقة
pipeline_tag: sentence-similarity
model-index:
- name: Omartificial-Intelligence-Space/Arabic-labse-Matryoshka
results:
- dataset:
config: ar
name: MTEB MintakaRetrieval (ar)
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
split: test
type: mintaka/mmteb-mintaka
metrics:
- type: main_score
value: 14.585
- type: map_at_1
value: 8.352
- type: map_at_3
value: 10.917
- type: map_at_5
value: 11.634
- type: map_at_10
value: 12.254
- type: ndcg_at_1
value: 8.352
- type: ndcg_at_3
value: 11.794
- type: ndcg_at_5
value: 13.085
- type: ndcg_at_10
value: 14.585
- type: recall_at_1
value: 8.352
- type: recall_at_3
value: 14.344
- type: recall_at_5
value: 17.476
- type: recall_at_10
value: 22.106
- type: precision_at_1
value: 8.352
- type: precision_at_3
value: 4.781
- type: precision_at_5
value: 3.495
- type: precision_at_10
value: 2.211
- type: mrr_at_1
value: 8.3522
- type: mrr_at_3
value: 10.9169
- type: mrr_at_5
value: 11.6341
- type: mrr_at_10
value: 12.2543
task:
type: Retrieval
- dataset:
config: ar
name: MTEB MIRACLRetrievalHardNegatives (ar)
revision: 95c8db7d4a6e9c1d8a60601afd63d553ae20a2eb
split: dev
type: miracl/mmteb-miracl-hardnegatives
metrics:
- type: main_score
value: 18.836
- type: map_at_1
value: 6.646
- type: map_at_3
value: 10.692
- type: map_at_5
value: 11.969
- type: map_at_10
value: 13.446
- type: ndcg_at_1
value: 10.5
- type: ndcg_at_3
value: 13.645
- type: ndcg_at_5
value: 15.504
- type: ndcg_at_10
value: 18.836
- type: recall_at_1
value: 6.646
- type: recall_at_3
value: 15.361
- type: recall_at_5
value: 19.925
- type: recall_at_10
value: 28.6
- type: precision_at_1
value: 10.5
- type: precision_at_3
value: 8.533
- type: precision_at_5
value: 6.9
- type: precision_at_10
value: 5.21
- type: mrr_at_1
value: 10.5
- type: mrr_at_3
value: 16.25
- type: mrr_at_5
value: 17.68
- type: mrr_at_10
value: 19.1759
task:
type: Retrieval
- dataset:
config: ar
name: MTEB MLQARetrieval (ar)
revision: 397ed406c1a7902140303e7faf60fff35b58d285
split: validation
type: mlqa/mmteb-mlqa
metrics:
- type: main_score
value: 61.582
- type: map_at_1
value: 47.195
- type: map_at_3
value: 54.03
- type: map_at_5
value: 55.77
- type: map_at_10
value: 56.649
- type: ndcg_at_1
value: 47.195
- type: ndcg_at_3
value: 56.295
- type: ndcg_at_5
value: 59.417
- type: ndcg_at_10
value: 61.582
- type: recall_at_1
value: 47.195
- type: recall_at_3
value: 62.863
- type: recall_at_5
value: 70.406
- type: recall_at_10
value: 77.176
- type: precision_at_1
value: 47.195
- type: precision_at_3
value: 20.954
- type: precision_at_5
value: 14.081
- type: precision_at_10
value: 7.718
- type: mrr_at_1
value: 47.1954
- type: mrr_at_3
value: 54.0297
- type: mrr_at_5
value: 55.7705
- type: mrr_at_10
value: 56.6492
task:
type: Retrieval
- dataset:
config: default
name: MTEB SadeemQuestionRetrieval (ar)
revision: 3cb0752b182e5d5d740df547748b06663c8e0bd9
split: test
type: sadeem/mmteb-sadeem
metrics:
- type: main_score
value: 57.653
- type: map_at_1
value: 25.084
- type: map_at_3
value: 46.338
- type: map_at_5
value: 47.556
- type: map_at_10
value: 48.207
- type: ndcg_at_1
value: 25.084
- type: ndcg_at_3
value: 53.91
- type: ndcg_at_5
value: 56.102
- type: ndcg_at_10
value: 57.653
- type: recall_at_1
value: 25.084
- type: recall_at_3
value: 76.017
- type: recall_at_5
value: 81.331
- type: recall_at_10
value: 86.07
- type: precision_at_1
value: 25.084
- type: precision_at_3
value: 25.339
- type: precision_at_5
value: 16.266
- type: precision_at_10
value: 8.607
- type: mrr_at_1
value: 23.1211
- type: mrr_at_3
value: 44.9657
- type: mrr_at_5
value: 46.3037
- type: mrr_at_10
value: 46.8749
task:
type: Retrieval
- dataset:
config: default
name: MTEB BIOSSES (default)
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cosine_pearson
value: 76.46793440999714
- type: cosine_spearman
value: 76.66439745271298
- type: euclidean_pearson
value: 76.52075972347127
- type: euclidean_spearman
value: 76.66439745271298
- type: main_score
value: 76.66439745271298
- type: manhattan_pearson
value: 76.68001857069733
- type: manhattan_spearman
value: 76.73066402288269
task:
type: STS
- dataset:
config: default
name: MTEB SICK-R (default)
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: cosine_pearson
value: 79.67657890693198
- type: cosine_spearman
value: 77.03286420274621
- type: euclidean_pearson
value: 78.1960735272073
- type: euclidean_spearman
value: 77.032855497919
- type: main_score
value: 77.03286420274621
- type: manhattan_pearson
value: 78.25627275994229
- type: manhattan_spearman
value: 77.00430810589081
task:
type: STS
- dataset:
config: default
name: MTEB STS12 (default)
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cosine_pearson
value: 83.94288954523996
- type: cosine_spearman
value: 79.21432176112556
- type: euclidean_pearson
value: 81.21333251943913
- type: euclidean_spearman
value: 79.2152067330468
- type: main_score
value: 79.21432176112556
- type: manhattan_pearson
value: 81.16910737482634
- type: manhattan_spearman
value: 79.08756466301445
task:
type: STS
- dataset:
config: default
name: MTEB STS13 (default)
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cosine_pearson
value: 77.48393909963059
- type: cosine_spearman
value: 79.54963868861196
- type: euclidean_pearson
value: 79.28416002197451
- type: euclidean_spearman
value: 79.54963861790114
- type: main_score
value: 79.54963868861196
- type: manhattan_pearson
value: 79.18653917582513
- type: manhattan_spearman
value: 79.46713533414295
task:
type: STS
- dataset:
config: default
name: MTEB STS14 (default)
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cosine_pearson
value: 78.51596313692846
- type: cosine_spearman
value: 78.84601702652395
- type: euclidean_pearson
value: 78.55199809961427
- type: euclidean_spearman
value: 78.84603362286225
- type: main_score
value: 78.84601702652395
- type: manhattan_pearson
value: 78.52780170677605
- type: manhattan_spearman
value: 78.77744294039178
task:
type: STS
- dataset:
config: default
name: MTEB STS15 (default)
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cosine_pearson
value: 84.53393478889929
- type: cosine_spearman
value: 85.60821849381648
- type: euclidean_pearson
value: 85.32813923250558
- type: euclidean_spearman
value: 85.6081835456016
- type: main_score
value: 85.60821849381648
- type: manhattan_pearson
value: 85.32782097916476
- type: manhattan_spearman
value: 85.58098670898562
task:
type: STS
- dataset:
config: default
name: MTEB STS16 (default)
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cosine_pearson
value: 77.00196998325856
- type: cosine_spearman
value: 79.930951699069
- type: euclidean_pearson
value: 79.43196738390897
- type: euclidean_spearman
value: 79.93095112410258
- type: main_score
value: 79.930951699069
- type: manhattan_pearson
value: 79.33744358111427
- type: manhattan_spearman
value: 79.82939266539601
task:
type: STS
- dataset:
config: ar-ar
name: MTEB STS17 (ar-ar)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 81.60289529424327
- type: cosine_spearman
value: 82.46806381979653
- type: euclidean_pearson
value: 81.32235058296072
- type: euclidean_spearman
value: 82.46676890643914
- type: main_score
value: 82.46806381979653
- type: manhattan_pearson
value: 81.43885277175312
- type: manhattan_spearman
value: 82.38955952718666
task:
type: STS
- dataset:
config: ar
name: MTEB STS22 (ar)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 49.58293768761314
- type: cosine_spearman
value: 57.261888789832874
- type: euclidean_pearson
value: 53.36549109538782
- type: euclidean_spearman
value: 57.261888789832874
- type: main_score
value: 57.261888789832874
- type: manhattan_pearson
value: 53.06640323833928
- type: manhattan_spearman
value: 57.05837935512948
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark (default)
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 81.43997935928729
- type: cosine_spearman
value: 82.04996129795596
- type: euclidean_pearson
value: 82.01917866996972
- type: euclidean_spearman
value: 82.04996129795596
- type: main_score
value: 82.04996129795596
- type: manhattan_pearson
value: 82.03487112040936
- type: manhattan_spearman
value: 82.03774605775651
task:
type: STS
- dataset:
config: default
name: MTEB SummEval (default)
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cosine_pearson
value: 32.113475997147674
- type: cosine_spearman
value: 32.17194233764879
- type: dot_pearson
value: 32.113469728827255
- type: dot_spearman
value: 32.174771315355386
- type: main_score
value: 32.17194233764879
- type: pearson
value: 32.113475997147674
- type: spearman
value: 32.17194233764879
task:
type: Summarization
- name: SentenceTransformer based on sentence-transformers/LaBSE
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.7269177710249681
name: Pearson Cosine
- type: spearman_cosine
value: 0.7225258779395222
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7259261785622463
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.7210463582530393
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7259567884235211
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.722525823788783
name: Spearman Euclidean
- type: pearson_dot
value: 0.7269177712136122
name: Pearson Dot
- type: spearman_dot
value: 0.7225258771129475
name: Spearman Dot
- type: pearson_max
value: 0.7269177712136122
name: Pearson Max
- type: spearman_max
value: 0.7225258779395222
name: Spearman Max
- type: pearson_cosine
value: 0.8143867576376295
name: Pearson Cosine
- type: spearman_cosine
value: 0.8205044914629483
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8203365887013151
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8203816698535976
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8201809453496319
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8205044914629483
name: Spearman Euclidean
- type: pearson_dot
value: 0.8143867541070537
name: Pearson Dot
- type: spearman_dot
value: 0.8205044914629483
name: Spearman Dot
- type: pearson_max
value: 0.8203365887013151
name: Pearson Max
- type: spearman_max
value: 0.8205044914629483
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.7268389724271859
name: Pearson Cosine
- type: spearman_cosine
value: 0.7224359411000278
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7241418669615103
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.7195408311833029
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7248184919191593
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7212936866178097
name: Spearman Euclidean
- type: pearson_dot
value: 0.7252522928016701
name: Pearson Dot
- type: spearman_dot
value: 0.7205040482865328
name: Spearman Dot
- type: pearson_max
value: 0.7268389724271859
name: Pearson Max
- type: spearman_max
value: 0.7224359411000278
name: Spearman Max
- type: pearson_cosine
value: 0.8143448965624136
name: Pearson Cosine
- type: spearman_cosine
value: 0.8211700903453509
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8217448619823571
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8216016599665544
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8216413349390971
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.82188122418776
name: Spearman Euclidean
- type: pearson_dot
value: 0.8097020064483653
name: Pearson Dot
- type: spearman_dot
value: 0.8147306090545295
name: Spearman Dot
- type: pearson_max
value: 0.8217448619823571
name: Pearson Max
- type: spearman_max
value: 0.82188122418776
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.7283468617741852
name: Pearson Cosine
- type: spearman_cosine
value: 0.7264294106954872
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7227711798003426
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.718067982079232
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7251492361775083
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7215068115809131
name: Spearman Euclidean
- type: pearson_dot
value: 0.7243396991648858
name: Pearson Dot
- type: spearman_dot
value: 0.7221390873398206
name: Spearman Dot
- type: pearson_max
value: 0.7283468617741852
name: Pearson Max
- type: spearman_max
value: 0.7264294106954872
name: Spearman Max
- type: pearson_cosine
value: 0.8075613785257986
name: Pearson Cosine
- type: spearman_cosine
value: 0.8159258089804861
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8208711370091426
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8196747601014518
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8210210137439432
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8203004500356083
name: Spearman Euclidean
- type: pearson_dot
value: 0.7870611647231145
name: Pearson Dot
- type: spearman_dot
value: 0.7874848213991118
name: Spearman Dot
- type: pearson_max
value: 0.8210210137439432
name: Pearson Max
- type: spearman_max
value: 0.8203004500356083
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.7102082520621849
name: Pearson Cosine
- type: spearman_cosine
value: 0.7103917869311991
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7134729607181519
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.708895102058259
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7171545288118942
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7130380237150746
name: Spearman Euclidean
- type: pearson_dot
value: 0.6777774738547628
name: Pearson Dot
- type: spearman_dot
value: 0.6746474823963989
name: Spearman Dot
- type: pearson_max
value: 0.7171545288118942
name: Pearson Max
- type: spearman_max
value: 0.7130380237150746
name: Spearman Max
- type: pearson_cosine
value: 0.8024378358145556
name: Pearson Cosine
- type: spearman_cosine
value: 0.8117561815472325
name: Spearman Cosine
- type: pearson_manhattan
value: 0.818920309459774
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8180515365910205
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8198346073356603
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8185162896024369
name: Spearman Euclidean
- type: pearson_dot
value: 0.7513270537478935
name: Pearson Dot
- type: spearman_dot
value: 0.7427542871546953
name: Spearman Dot
- type: pearson_max
value: 0.8198346073356603
name: Pearson Max
- type: spearman_max
value: 0.8185162896024369
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.6930745722517785
name: Pearson Cosine
- type: spearman_cosine
value: 0.6982194042238953
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6971382079778946
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6942362764367931
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7012627015062325
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6986972295835788
name: Spearman Euclidean
- type: pearson_dot
value: 0.6376735798940838
name: Pearson Dot
- type: spearman_dot
value: 0.6344835722310429
name: Spearman Dot
- type: pearson_max
value: 0.7012627015062325
name: Pearson Max
- type: spearman_max
value: 0.6986972295835788
name: Spearman Max
- type: pearson_cosine
value: 0.7855080652087961
name: Pearson Cosine
- type: spearman_cosine
value: 0.7948979371698327
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8060407473462375
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8041199691999044
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8088262858195556
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8060483394849104
name: Spearman Euclidean
- type: pearson_dot
value: 0.677754045289596
name: Pearson Dot
- type: spearman_dot
value: 0.6616232873061395
name: Spearman Dot
- type: pearson_max
value: 0.8088262858195556
name: Pearson Max
- type: spearman_max
value: 0.8060483394849104
name: Spearman Max
license: apache-2.0
---
# SentenceTransformer based on sentence-transformers/LaBSE
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) <!-- at revision e34fab64a3011d2176c99545a93d5cbddc9a91b7 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-labse-Matryoshka")
# Run inference
sentences = [
'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
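Because the model was trained with a Matryoshka loss, embeddings can also be truncated to the smaller dimensions evaluated below; a hedged sketch (assuming a sentence-transformers version that supports `truncate_dim`, as the 3.0.1 release listed in this card does) follows.
```python
from sentence_transformers import SentenceTransformer
# Load the model with embeddings truncated to one of the Matryoshka dimensions (768, 512, 256, 128, 64).
model_256 = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-labse-Matryoshka", truncate_dim=256)
embeddings = model_256.encode([
    "يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.",
    "ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه",
])
print(embeddings.shape)  # (2, 256)
```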
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7269 |
| **spearman_cosine** | **0.7225** |
| pearson_manhattan | 0.7259 |
| spearman_manhattan | 0.721 |
| pearson_euclidean | 0.726 |
| spearman_euclidean | 0.7225 |
| pearson_dot | 0.7269 |
| spearman_dot | 0.7225 |
| pearson_max | 0.7269 |
| spearman_max | 0.7225 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7268 |
| **spearman_cosine** | **0.7224** |
| pearson_manhattan | 0.7241 |
| spearman_manhattan | 0.7195 |
| pearson_euclidean | 0.7248 |
| spearman_euclidean | 0.7213 |
| pearson_dot | 0.7253 |
| spearman_dot | 0.7205 |
| pearson_max | 0.7268 |
| spearman_max | 0.7224 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7283 |
| **spearman_cosine** | **0.7264** |
| pearson_manhattan | 0.7228 |
| spearman_manhattan | 0.7181 |
| pearson_euclidean | 0.7251 |
| spearman_euclidean | 0.7215 |
| pearson_dot | 0.7243 |
| spearman_dot | 0.7221 |
| pearson_max | 0.7283 |
| spearman_max | 0.7264 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7102 |
| **spearman_cosine** | **0.7104** |
| pearson_manhattan | 0.7135 |
| spearman_manhattan | 0.7089 |
| pearson_euclidean | 0.7172 |
| spearman_euclidean | 0.713 |
| pearson_dot | 0.6778 |
| spearman_dot | 0.6746 |
| pearson_max | 0.7172 |
| spearman_max | 0.713 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6931 |
| **spearman_cosine** | **0.6982** |
| pearson_manhattan | 0.6971 |
| spearman_manhattan | 0.6942 |
| pearson_euclidean | 0.7013 |
| spearman_euclidean | 0.6987 |
| pearson_dot | 0.6377 |
| spearman_dot | 0.6345 |
| pearson_max | 0.7013 |
| spearman_max | 0.6987 |
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8144 |
| **spearman_cosine** | **0.8205** |
| pearson_manhattan | 0.8203 |
| spearman_manhattan | 0.8204 |
| pearson_euclidean | 0.8202 |
| spearman_euclidean | 0.8205 |
| pearson_dot | 0.8144 |
| spearman_dot | 0.8205 |
| pearson_max | 0.8203 |
| spearman_max | 0.8205 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8143 |
| **spearman_cosine** | **0.8212** |
| pearson_manhattan | 0.8217 |
| spearman_manhattan | 0.8216 |
| pearson_euclidean | 0.8216 |
| spearman_euclidean | 0.8219 |
| pearson_dot | 0.8097 |
| spearman_dot | 0.8147 |
| pearson_max | 0.8217 |
| spearman_max | 0.8219 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8076 |
| **spearman_cosine** | **0.8159** |
| pearson_manhattan | 0.8209 |
| spearman_manhattan | 0.8197 |
| pearson_euclidean | 0.821 |
| spearman_euclidean | 0.8203 |
| pearson_dot | 0.7871 |
| spearman_dot | 0.7875 |
| pearson_max | 0.821 |
| spearman_max | 0.8203 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8024 |
| **spearman_cosine** | **0.8118** |
| pearson_manhattan | 0.8189 |
| spearman_manhattan | 0.8181 |
| pearson_euclidean | 0.8198 |
| spearman_euclidean | 0.8185 |
| pearson_dot | 0.7513 |
| spearman_dot | 0.7428 |
| pearson_max | 0.8198 |
| spearman_max | 0.8185 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7855 |
| **spearman_cosine** | **0.7949** |
| pearson_manhattan | 0.806 |
| spearman_manhattan | 0.8041 |
| pearson_euclidean | 0.8088 |
| spearman_euclidean | 0.806 |
| pearson_dot | 0.6778 |
| spearman_dot | 0.6616 |
| pearson_max | 0.8088 |
| spearman_max | 0.806 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.99 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.44 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.82 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> |
| <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> |
| <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
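Reconstructed from the parameters above, the training loss corresponds roughly to the following hedged sentence-transformers sketch (variable names are illustrative, not from the original training script):
```python
# Hedged reconstruction of the loss setup described above; not the author's exact training code.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
model = SentenceTransformer("sentence-transformers/LaBSE")  # base model named in this card
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```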
### Evaluation Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 19.71 tokens</li><li>max: 100 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.37 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.49 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> |
| <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> |
| <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | <code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| None | 0 | - | 0.7104 | 0.7264 | 0.7224 | 0.6982 | 0.7225 |
| 0.0229 | 200 | 13.1738 | - | - | - | - | - |
| 0.0459 | 400 | 8.8127 | - | - | - | - | - |
| 0.0688 | 600 | 8.0984 | - | - | - | - | - |
| 0.0918 | 800 | 7.2984 | - | - | - | - | - |
| 0.1147 | 1000 | 7.5749 | - | - | - | - | - |
| 0.1377 | 1200 | 7.1292 | - | - | - | - | - |
| 0.1606 | 1400 | 6.6146 | - | - | - | - | - |
| 0.1835 | 1600 | 6.6523 | - | - | - | - | - |
| 0.2065 | 1800 | 6.1095 | - | - | - | - | - |
| 0.2294 | 2000 | 6.0841 | - | - | - | - | - |
| 0.2524 | 2200 | 6.3024 | - | - | - | - | - |
| 0.2753 | 2400 | 6.1941 | - | - | - | - | - |
| 0.2983 | 2600 | 6.1686 | - | - | - | - | - |
| 0.3212 | 2800 | 5.8317 | - | - | - | - | - |
| 0.3442 | 3000 | 6.0597 | - | - | - | - | - |
| 0.3671 | 3200 | 5.7832 | - | - | - | - | - |
| 0.3900 | 3400 | 5.7088 | - | - | - | - | - |
| 0.4130 | 3600 | 5.6988 | - | - | - | - | - |
| 0.4359 | 3800 | 5.5268 | - | - | - | - | - |
| 0.4589 | 4000 | 5.5543 | - | - | - | - | - |
| 0.4818 | 4200 | 5.3152 | - | - | - | - | - |
| 0.5048 | 4400 | 5.2894 | - | - | - | - | - |
| 0.5277 | 4600 | 5.1805 | - | - | - | - | - |
| 0.5506 | 4800 | 5.4559 | - | - | - | - | - |
| 0.5736 | 5000 | 5.3836 | - | - | - | - | - |
| 0.5965 | 5200 | 5.2626 | - | - | - | - | - |
| 0.6195 | 5400 | 5.2511 | - | - | - | - | - |
| 0.6424 | 5600 | 5.3308 | - | - | - | - | - |
| 0.6654 | 5800 | 5.2264 | - | - | - | - | - |
| 0.6883 | 6000 | 5.2881 | - | - | - | - | - |
| 0.7113 | 6200 | 5.1349 | - | - | - | - | - |
| 0.7342 | 6400 | 5.0872 | - | - | - | - | - |
| 0.7571 | 6600 | 4.5515 | - | - | - | - | - |
| 0.7801 | 6800 | 3.4312 | - | - | - | - | - |
| 0.8030 | 7000 | 3.1008 | - | - | - | - | - |
| 0.8260 | 7200 | 2.9582 | - | - | - | - | - |
| 0.8489 | 7400 | 2.8153 | - | - | - | - | - |
| 0.8719 | 7600 | 2.7214 | - | - | - | - | - |
| 0.8948 | 7800 | 2.5392 | - | - | - | - | - |
| 0.9177 | 8000 | 2.584 | - | - | - | - | - |
| 0.9407 | 8200 | 2.5384 | - | - | - | - | - |
| 0.9636 | 8400 | 2.4937 | - | - | - | - | - |
| 0.9866 | 8600 | 2.4155 | - | - | - | - | - |
| 1.0 | 8717 | - | 0.8118 | 0.8159 | 0.8212 | 0.7949 | 0.8205 |
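The Matryoshka columns above report Spearman cosine correlation on the STS test split at truncated embedding sizes (768, 512, 256, 128 and 64). As a minimal sketch (the model ID below is a placeholder, not taken from this card), such a model can be loaded at a reduced dimension via Sentence Transformers' `truncate_dim`:
```python
# Minimal sketch: load a Matryoshka-trained model at a reduced embedding size.
# "author/arabic-matryoshka-model" is a placeholder; substitute the actual repo ID.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("author/arabic-matryoshka-model", truncate_dim=256)
embeddings = model.encode(["first example sentence", "second example sentence"])
print(embeddings.shape)  # (2, 256)
```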
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## <span style="color:blue">Acknowledgments</span>
The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.
## Citation
If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:
```bibtex
@misc{nacar2024enhancingsemanticsimilarityunderstanding,
      title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning},
      author={Omer Nacar and Anis Koubaa},
      year={2024},
      eprint={2407.21139},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.21139},
}
``` |
AdiCakepLabs/otti_v2 | AdiCakepLabs | 2025-01-10T18:01:33Z | 26 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:adapter:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-01-10T08:37:06Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\0o\0t\0t\0i\0,\0 \0b\0o\0y\0_\0o\0t\0t\0i\0,\0 \0d\0a\0r\0k\0e\0r\0-\0s\0h\0a\0d\0e\0-\0t\0e\0a\0l\0_\0O\0t\0t\0i\0_\0e\0a\0r\0s\0,\0 \0s\0y\0m\0b\0o\0l\0-\0s\0h\0a\0p\0e\0d\0 \0p\0u\0p\0i\0l\0s\0,\0 \0b\0l\0a\0c\0k\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0o\0t\0t\0i\0_\0h\0o\0l\0d\0i\0n\0g\0,\0 \0:\0O\0,\0 \0b\0l\0u\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0b\0l\0u\0e\0 \0e\0y\0e\0s\0,\0 \0b\0l\0u\0e\0 \0s\0c\0a\0r\0f\0,\0 \0c\0l\0o\0s\0e\0d\0 \0e\0y\0e\0s\0,\0 \0l\0o\0o\0k\0i\0n\0g\0 \0a\0t\0 \0v\0i\0e\0w\0e\0r\0,\0 \0f\0u\0l\0l\0 \0b\0o\0d\0y\0,\0 \0w\0h\0i\0s\0k\0e\0r\0s\0,\0 \0c\0l\0o\0s\0e\0d\0 \0m\0o\0u\0t\0h\0,\0 \0o\0t\0t\0i\0_\0b\0a\0c\0k\0_\0s\0i\0d\0e\0,\0 \0h\0e\0a\0r\0t\0,\0 \0b\0l\0a\0c\0k\0-\0l\0i\0n\0e\0_\0O\0t\0t\0i\0_\0w\0h\0i\0s\0k\0e\0r\0s\0,\0 \0b\0l\0a\0c\0k\0 \0e\0y\0e\0s\0,\0 \0o\0n\0 \0b\0a\0c\0k\0,\0 \0m\0i\0d\0d\0l\0e\0 \0f\0i\0n\0g\0e\0r\0,\0 \0c\0r\0y\0i\0n\0g\0,\0 \0n\0o\0 \0h\0u\0m\0a\0n\0s\0,\0 \0h\0o\0l\0d\0i\0n\0g\0 \0k\0n\0i\0f\0e\0"
output:
url: images/Otti v2 Model 2025-01-10.png
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
instance_prompt: otti
license: apache-2.0
---
# otti_v2
<Gallery />
## Trigger words
You should use `otti` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/AdiCakepLabs/otti_v2/tree/main) them in the Files & versions tab.
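Below is a minimal, hedged sketch of how such LoRA weights are typically applied with diffusers; the prompt and settings are illustrative assumptions, not values from this card.
```python
# Sketch: load the Illustrious XL base model, attach the otti_v2 LoRA, and generate.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "OnomaAIResearch/Illustrious-xl-early-release-v0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("AdiCakepLabs/otti_v2")

image = pipe("otti, full body, blue scarf, looking at viewer").images[0]
image.save("otti.png")
```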
|
kostiantynk1205/420758a0-ee9a-4513-98ea-c6cdeecb5a47 | kostiantynk1205 | 2025-01-10T17:55:25Z | 10 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | 2025-01-10T17:24:48Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 420758a0-ee9a-4513-98ea-c6cdeecb5a47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d8a87189652aae0f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d8a87189652aae0f_train_data.json
type:
field_input: condition
field_instruction: drugName
field_output: review
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/420758a0-ee9a-4513-98ea-c6cdeecb5a47
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/d8a87189652aae0f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 25e6c5f1-d7ed-4fe4-884d-711b49dddbac
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 25e6c5f1-d7ed-4fe4-884d-711b49dddbac
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 420758a0-ee9a-4513-98ea-c6cdeecb5a47
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5850
## Model description
More information needed
## Intended uses & limitations
More information needed
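No intended uses are documented yet; as a minimal sketch (the prompt is illustrative, following the `'{instruction} {input}'` format from the config), the adapter can be loaded for inference with PEFT:
```python
# Sketch: attach the LoRA adapter to the base model and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-2b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kostiantynk1205/420758a0-ee9a-4513-98ea-c6cdeecb5a47")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b-it")

inputs = tokenizer("Ibuprofen Headache", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```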
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7053 | 0.0000 | 1 | 3.0780 |
| 3.086 | 0.0001 | 3 | 3.0555 |
| 2.8443 | 0.0002 | 6 | 2.8310 |
| 2.4184 | 0.0004 | 9 | 2.5850 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/0beeabc9-2727-4769-8a36-82195c339b2f | great0001 | 2025-01-10T17:55:24Z | 16 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T17:54:47Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0beeabc9-2727-4769-8a36-82195c339b2f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e6a65369dcce024b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e6a65369dcce024b_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/0beeabc9-2727-4769-8a36-82195c339b2f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e6a65369dcce024b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2c4b003d-1db2-4e2d-bd96-c12e7ade5571
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2c4b003d-1db2-4e2d-bd96-c12e7ade5571
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0beeabc9-2727-4769-8a36-82195c339b2f
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 32.1478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 461.7433 | 0.0006 | 1 | 32.1927 |
| 44.2237 | 0.0018 | 3 | 32.1861 |
| 72.9652 | 0.0037 | 6 | 32.1752 |
| 143.8256 | 0.0055 | 9 | 32.1478 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tuanna08go/f8d99c6f-3a86-315c-763e-ffb180dc039a | tuanna08go | 2025-01-10T17:54:00Z | 13 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | 2025-01-10T16:45:53Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f8d99c6f-3a86-315c-763e-ffb180dc039a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d8a87189652aae0f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d8a87189652aae0f_train_data.json
type:
field_input: condition
field_instruction: drugName
field_output: review
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/f8d99c6f-3a86-315c-763e-ffb180dc039a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d8a87189652aae0f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 25e6c5f1-d7ed-4fe4-884d-711b49dddbac
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 25e6c5f1-d7ed-4fe4-884d-711b49dddbac
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f8d99c6f-3a86-315c-763e-ffb180dc039a
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 3.0783 |
| 2.8299 | 0.0004 | 10 | 2.7094 |
| 2.5219 | 0.0008 | 20 | 2.4601 |
| 2.353 | 0.0012 | 30 | 2.4072 |
| 2.3631 | 0.0016 | 40 | 2.3964 |
| 2.4035 | 0.0020 | 50 | 2.3943 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hryzion/pokemon-lora | hryzion | 2025-01-10T17:52:58Z | 12 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-01-09T07:20:48Z | ---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - hryzion/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the svjack/pokemon-blip-captions-en-zh dataset. Some example images are shown below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
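A minimal, hedged sketch of the usual diffusers LoRA loading pattern for this adapter (the prompt and step count are illustrative assumptions):
```python
# Sketch: load Stable Diffusion v1-5, attach the pokemon LoRA, and generate an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("hryzion/pokemon-lora")

image = pipe("a green dragon pokemon with blue eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```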
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
dimasik1987/cb6ae267-aa18-49f8-b1a7-6b9646d55ce1 | dimasik1987 | 2025-01-10T17:52:53Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-13b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-13b-hf-flash",
"region:us"
] | null | 2025-01-10T17:38:30Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cb6ae267-aa18-49f8-b1a7-6b9646d55ce1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-13b-hf-flash
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- de51b1266f93339a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/de51b1266f93339a_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik1987/cb6ae267-aa18-49f8-b1a7-6b9646d55ce1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/de51b1266f93339a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3739b0f7-2f6d-494a-a810-4bb94c46aa31
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3739b0f7-2f6d-494a-a810-4bb94c46aa31
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cb6ae267-aa18-49f8-b1a7-6b9646d55ce1
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0972
## Model description
More information needed
## Intended uses & limitations
More information needed
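The intended uses are not documented; as a hedged sketch, the adapter can be attached to the base model with PEFT and, optionally, merged into the base weights (the prompt is illustrative):
```python
# Sketch: attach the LoRA adapter, optionally merge it, and run a generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/CodeLlama-13b-hf-flash",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # mirrors trust_remote_code: true in the training config
)
model = PeftModel.from_pretrained(base, "dimasik1987/cb6ae267-aa18-49f8-b1a7-6b9646d55ce1")
merged = model.merge_and_unload()  # optional: bake the LoRA into the base weights

tokenizer = AutoTokenizer.from_pretrained("NousResearch/CodeLlama-13b-hf-flash")
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(merged.device)
print(tokenizer.decode(merged.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```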
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.1816 |
| 8.2767 | 0.0040 | 8 | 2.1498 |
| 8.3844 | 0.0080 | 16 | 2.1088 |
| 7.9056 | 0.0119 | 24 | 2.0972 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |