modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
jeFF9999/qwen2.5-7b-instruct-trl-sft-model | jeFF9999 | 2025-04-27T02:22:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T02:11:37Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: qwen2.5-7b-instruct-trl-sft-model
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-7b-instruct-trl-sft-model
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jeFF9999/qwen2.5-7b-instruct-trl-sft-model", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/maxime-imbeau-umaneo/qwen2.5-7b-instruct-trl-sft/runs/yqxlpl0y)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.1
- Pytorch: 2.1.0+cu118
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
genki10/BERT_V8_sp10_lw40_ex100_lo100_k5_k5_fold4 | genki10 | 2025-04-27T00:51:23Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T00:33:31Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex100_lo100_k5_k5_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex100_lo100_k5_k5_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8568
- Qwk: 0.4126
- Mse: 0.8568
- Rmse: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
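As a rough illustration (not part of the original card), these settings could be expressed with the standard `transformers` Trainer API roughly as in the sketch below; the output directory name is a placeholder.
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="bert_v8_fold4",           # placeholder, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",                  # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```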
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 1.0 | 4 | 10.0080 | 0.0066 | 10.0080 | 3.1635 |
| No log | 2.0 | 8 | 5.8547 | 0.0505 | 5.8547 | 2.4196 |
| No log | 3.0 | 12 | 3.1352 | 0.0118 | 3.1352 | 1.7707 |
| No log | 4.0 | 16 | 1.7320 | 0.0394 | 1.7320 | 1.3161 |
| No log | 5.0 | 20 | 1.2269 | 0.0342 | 1.2269 | 1.1076 |
| No log | 6.0 | 24 | 0.8727 | 0.1948 | 0.8727 | 0.9342 |
| No log | 7.0 | 28 | 1.0243 | 0.0495 | 1.0243 | 1.0121 |
| No log | 8.0 | 32 | 0.7590 | 0.4097 | 0.7590 | 0.8712 |
| No log | 9.0 | 36 | 0.6756 | 0.5025 | 0.6756 | 0.8220 |
| No log | 10.0 | 40 | 1.0251 | 0.3609 | 1.0251 | 1.0124 |
| No log | 11.0 | 44 | 0.9219 | 0.4012 | 0.9219 | 0.9601 |
| No log | 12.0 | 48 | 0.6766 | 0.4951 | 0.6766 | 0.8225 |
| No log | 13.0 | 52 | 1.1625 | 0.3650 | 1.1625 | 1.0782 |
| No log | 14.0 | 56 | 0.8185 | 0.4478 | 0.8185 | 0.9047 |
| No log | 15.0 | 60 | 1.1450 | 0.3743 | 1.1450 | 1.0701 |
| No log | 16.0 | 64 | 0.7286 | 0.4854 | 0.7286 | 0.8536 |
| No log | 17.0 | 68 | 0.9480 | 0.3957 | 0.9480 | 0.9736 |
| No log | 18.0 | 72 | 0.9902 | 0.3894 | 0.9902 | 0.9951 |
| No log | 19.0 | 76 | 0.7143 | 0.5412 | 0.7143 | 0.8451 |
| No log | 20.0 | 80 | 1.3726 | 0.3112 | 1.3726 | 1.1716 |
| No log | 21.0 | 84 | 0.9579 | 0.4055 | 0.9579 | 0.9787 |
| No log | 22.0 | 88 | 0.9382 | 0.3987 | 0.9382 | 0.9686 |
| No log | 23.0 | 92 | 0.9784 | 0.3937 | 0.9784 | 0.9891 |
| No log | 24.0 | 96 | 1.3741 | 0.2886 | 1.3741 | 1.1722 |
| No log | 25.0 | 100 | 0.7647 | 0.4519 | 0.7647 | 0.8745 |
| No log | 26.0 | 104 | 1.6422 | 0.2269 | 1.6422 | 1.2815 |
| No log | 27.0 | 108 | 0.7155 | 0.4536 | 0.7155 | 0.8459 |
| No log | 28.0 | 112 | 0.9718 | 0.3850 | 0.9718 | 0.9858 |
| No log | 29.0 | 116 | 0.8900 | 0.3954 | 0.8900 | 0.9434 |
| No log | 30.0 | 120 | 1.1999 | 0.3136 | 1.1999 | 1.0954 |
| No log | 31.0 | 124 | 0.8515 | 0.4071 | 0.8515 | 0.9228 |
| No log | 32.0 | 128 | 1.1243 | 0.3429 | 1.1243 | 1.0603 |
| No log | 33.0 | 132 | 1.0474 | 0.3745 | 1.0474 | 1.0234 |
| No log | 34.0 | 136 | 0.9949 | 0.3880 | 0.9949 | 0.9974 |
| No log | 35.0 | 140 | 1.3042 | 0.2998 | 1.3042 | 1.1420 |
| No log | 36.0 | 144 | 0.7902 | 0.3926 | 0.7902 | 0.8889 |
| No log | 37.0 | 148 | 1.0854 | 0.3277 | 1.0854 | 1.0418 |
| No log | 38.0 | 152 | 0.8275 | 0.4027 | 0.8275 | 0.9097 |
| No log | 39.0 | 156 | 1.1221 | 0.3287 | 1.1221 | 1.0593 |
| No log | 40.0 | 160 | 0.8769 | 0.3993 | 0.8769 | 0.9364 |
| No log | 41.0 | 164 | 1.1536 | 0.3024 | 1.1536 | 1.0741 |
| No log | 42.0 | 168 | 0.9203 | 0.3626 | 0.9203 | 0.9593 |
| No log | 43.0 | 172 | 1.1411 | 0.3009 | 1.1411 | 1.0682 |
| No log | 44.0 | 176 | 0.8892 | 0.4035 | 0.8892 | 0.9430 |
| No log | 45.0 | 180 | 1.2542 | 0.3188 | 1.2542 | 1.1199 |
| No log | 46.0 | 184 | 0.8217 | 0.4340 | 0.8217 | 0.9065 |
| No log | 47.0 | 188 | 1.2440 | 0.3004 | 1.2440 | 1.1153 |
| No log | 48.0 | 192 | 0.7930 | 0.4453 | 0.7930 | 0.8905 |
| No log | 49.0 | 196 | 0.8568 | 0.4126 | 0.8568 | 0.9257 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
genki10/BERT_V8_sp10_lw40_ex100_lo100_k5_k5_fold3 | genki10 | 2025-04-27T00:33:26Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T00:14:56Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex100_lo100_k5_k5_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex100_lo100_k5_k5_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0506
- Qwk: 0.3377
- Mse: 1.0507
- Rmse: 1.0250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 4 | 10.6077 | 0.0 | 10.6056 | 3.2566 |
| No log | 2.0 | 8 | 8.3655 | 0.0 | 8.3637 | 2.8920 |
| No log | 3.0 | 12 | 7.9363 | 0.0 | 7.9346 | 2.8168 |
| No log | 4.0 | 16 | 7.4788 | 0.0 | 7.4771 | 2.7344 |
| No log | 5.0 | 20 | 6.5038 | -0.0017 | 6.5023 | 2.5500 |
| No log | 6.0 | 24 | 4.0815 | 0.0 | 4.0805 | 2.0200 |
| No log | 7.0 | 28 | 2.7843 | 0.0003 | 2.7834 | 1.6684 |
| No log | 8.0 | 32 | 1.7015 | 0.0093 | 1.7007 | 1.3041 |
| No log | 9.0 | 36 | 1.1864 | 0.0 | 1.1861 | 1.0891 |
| No log | 10.0 | 40 | 1.0123 | 0.0 | 1.0121 | 1.0060 |
| No log | 11.0 | 44 | 1.2791 | 0.0129 | 1.2788 | 1.1309 |
| No log | 12.0 | 48 | 1.0698 | 0.1225 | 1.0696 | 1.0342 |
| No log | 13.0 | 52 | 1.2781 | 0.1472 | 1.2780 | 1.1305 |
| No log | 14.0 | 56 | 1.2359 | 0.1895 | 1.2357 | 1.1116 |
| No log | 15.0 | 60 | 0.6045 | 0.4462 | 0.6046 | 0.7776 |
| No log | 16.0 | 64 | 0.9881 | 0.2481 | 0.9879 | 0.9939 |
| No log | 17.0 | 68 | 0.7992 | 0.3210 | 0.7994 | 0.8941 |
| No log | 18.0 | 72 | 0.7167 | 0.4027 | 0.7171 | 0.8468 |
| No log | 19.0 | 76 | 1.6460 | 0.2078 | 1.6458 | 1.2829 |
| No log | 20.0 | 80 | 0.6952 | 0.5174 | 0.6955 | 0.8340 |
| No log | 21.0 | 84 | 1.8536 | 0.1954 | 1.8531 | 1.3613 |
| No log | 22.0 | 88 | 0.9072 | 0.3954 | 0.9072 | 0.9525 |
| No log | 23.0 | 92 | 1.4696 | 0.2580 | 1.4694 | 1.2122 |
| No log | 24.0 | 96 | 1.3728 | 0.2841 | 1.3727 | 1.1716 |
| No log | 25.0 | 100 | 1.1966 | 0.3212 | 1.1966 | 1.0939 |
| No log | 26.0 | 104 | 1.2964 | 0.2944 | 1.2964 | 1.1386 |
| No log | 27.0 | 108 | 1.5836 | 0.2521 | 1.5835 | 1.2584 |
| No log | 28.0 | 112 | 1.0246 | 0.3770 | 1.0248 | 1.0123 |
| No log | 29.0 | 116 | 1.8242 | 0.1960 | 1.8240 | 1.3506 |
| No log | 30.0 | 120 | 1.4130 | 0.2677 | 1.4131 | 1.1887 |
| No log | 31.0 | 124 | 0.8526 | 0.3840 | 0.8529 | 0.9235 |
| No log | 32.0 | 128 | 1.5843 | 0.2363 | 1.5842 | 1.2587 |
| No log | 33.0 | 132 | 0.8602 | 0.4028 | 0.8603 | 0.9275 |
| No log | 34.0 | 136 | 1.3910 | 0.2563 | 1.3909 | 1.1793 |
| No log | 35.0 | 140 | 1.2751 | 0.2854 | 1.2750 | 1.1292 |
| No log | 36.0 | 144 | 1.0376 | 0.3437 | 1.0376 | 1.0186 |
| No log | 37.0 | 148 | 1.1208 | 0.3265 | 1.1208 | 1.0587 |
| No log | 38.0 | 152 | 1.1748 | 0.3165 | 1.1749 | 1.0839 |
| No log | 39.0 | 156 | 1.1776 | 0.3215 | 1.1776 | 1.0852 |
| No log | 40.0 | 160 | 1.2587 | 0.2935 | 1.2587 | 1.1219 |
| No log | 41.0 | 164 | 0.9508 | 0.3482 | 0.9509 | 0.9751 |
| No log | 42.0 | 168 | 1.2712 | 0.2868 | 1.2711 | 1.1274 |
| No log | 43.0 | 172 | 0.8891 | 0.3756 | 0.8893 | 0.9430 |
| No log | 44.0 | 176 | 1.5354 | 0.2425 | 1.5353 | 1.2391 |
| No log | 45.0 | 180 | 1.0074 | 0.3544 | 1.0076 | 1.0038 |
| No log | 46.0 | 184 | 1.0331 | 0.3554 | 1.0332 | 1.0165 |
| No log | 47.0 | 188 | 1.3824 | 0.2874 | 1.3824 | 1.1757 |
| No log | 48.0 | 192 | 0.8946 | 0.4039 | 0.8947 | 0.9459 |
| No log | 49.0 | 196 | 1.1818 | 0.3135 | 1.1819 | 1.0871 |
| No log | 50.0 | 200 | 1.0506 | 0.3377 | 1.0507 | 1.0250 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-full-parameter-4-1e-05 | ASethi04 | 2025-04-27T00:02:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T22:49:42Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-opc-sft-first-full-parameter-4-1e-05
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-opc-sft-first-full-parameter-4-1e-05
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-full-parameter-4-1e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/56xi3yti)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
shovit/medbot-llama-3.2-3B-gguf | shovit | 2025-04-26T23:32:28Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T09:33:02Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shovit
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DayanandaThokchom/whisper-small-mni | DayanandaThokchom | 2025-04-26T23:25:47Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"mni",
"dataset:DayanandaThokchom/Manipur-meiteilon-ASR-DUAL-script",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-26T21:54:16Z | ---
library_name: transformers
language:
- mni
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- DayanandaThokchom/Manipur-meiteilon-ASR-DUAL-script
metrics:
- wer
model-index:
- name: Whisper Small mni - DayanandaThokchom
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Manipur-meiteilon-ASR
type: DayanandaThokchom/Manipur-meiteilon-ASR-DUAL-script
args: 'config: mni, split: test'
metrics:
- name: Wer
type: wer
value: 102.86995515695068
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small mni - DayanandaThokchom
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Manipur-meiteilon-ASR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3511
- Wer: 102.8700
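As an illustrative sketch (not part of the original card), the checkpoint could presumably be loaded with the `transformers` speech-recognition pipeline; the audio file name below is a placeholder.
```python
from transformers import pipeline

# Hypothetical usage sketch; "sample.wav" is a placeholder audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="DayanandaThokchom/whisper-small-mni",
)
print(asr("sample.wav")["text"])
```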
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.5783 | 0.1410 | 100 | 1.8435 | 129.0859 |
| 1.1639 | 0.2820 | 200 | 1.5188 | 122.7389 |
| 1.119 | 0.4230 | 300 | 1.4262 | 90.9417 |
| 1.0938 | 0.5640 | 400 | 1.3730 | 105.8710 |
| 1.0367 | 0.7050 | 500 | 1.3511 | 102.8700 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
marialvsantiago/3ebd6c20-1b4b-4abb-8221-0a3a58c0ea7c | marialvsantiago | 2025-04-26T23:25:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T23:00:55Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3ebd6c20-1b4b-4abb-8221-0a3a58c0ea7c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9ddca7aa4e960bfe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9ddca7aa4e960bfe_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/3ebd6c20-1b4b-4abb-8221-0a3a58c0ea7c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/9ddca7aa4e960bfe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8dd130e9-5673-4235-a933-9e6897947742
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 8dd130e9-5673-4235-a933-9e6897947742
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3ebd6c20-1b4b-4abb-8221-0a3a58c0ea7c
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0126 | 0.0077 | 200 | 0.0244 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xw17/Qwen2-1.5B-Instruct_finetuned__optimized1_lora_universal | xw17 | 2025-04-26T22:58:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T22:58:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/c1_code_nod_4s | mlfoundations-dev | 2025-04-26T22:49:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T07:05:01Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_code_nod_4s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_code_nod_4s
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_code_nod_4s dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
ariaattarml/Preview-TensorStax-TS-7b-1.0 | ariaattarml | 2025-04-26T22:13:05Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2025-04-26T22:04:09Z | ## Overview
Early research preview of ariaattarml/TensorStax-TS-7b-1.0
## System prompt format:
```python
Task Overview:
You are a data science expert. Below, you are provided with a database schema and a natural language question. Your task is to understand the schema and generate a valid SQL query to answer the question.
Database Engine:
SQLite
Database Schema:
Table: circuits
circuit_id (INT(11))
circuit_ref (VARCHAR(255))
name (VARCHAR(255))
location (VARCHAR(255))
country (VARCHAR(255))
lat (FLOAT)
lng (FLOAT)
alt (INT(11))
url (VARCHAR(255))
Table: constructor_results
constructor_results_id (INT(11))
race_id (INT(11))
constructor_id (INT(11))
points (FLOAT)
status (VARCHAR(255))
Table: constructor_standings
constructor_standings_id (INT(11))
race_id (INT(11))
constructor_id (INT(11))
points (FLOAT)
position (INT(11))
position_text (VARCHAR(255))
wins (INT(11))
Table: constructors
constructor_id (INT(11))
constructor_ref (VARCHAR(255))
name (VARCHAR(255))
nationality (VARCHAR(255))
url (VARCHAR(255))
Table: driver_standings
driver_standings_id (INT(11))
race_id (INT(11))
driver_id (INT(11))
points (FLOAT)
position (INT(11))
position_text (VARCHAR(255))
wins (INT(11))
Table: drivers
driver_id (INT(11))
driver_ref (VARCHAR(255))
number (INT(11))
code (VARCHAR(3))
forename (VARCHAR(255))
surname (VARCHAR(255))
dob (DATE)
nationality (VARCHAR(255))
url (VARCHAR(255))
Table: lap_times
race_id (INT(11))
driver_id (INT(11))
lap (INT(11))
position (INT(11))
time (VARCHAR(255))
milliseconds (INT(11))
Table: pit_stops
race_id (INT(11))
driver_id (INT(11))
stop (INT(11))
lap (INT(11))
time (TIME)
duration (VARCHAR(255))
milliseconds (INT(11))
Table: qualifying
qualify_id (INT(11))
race_id (INT(11))
driver_id (INT(11))
constructor_id (INT(11))
number (INT(11))
position (INT(11))
q1 (VARCHAR(255))
q2 (VARCHAR(255))
q3 (VARCHAR(255))
Table: races
race_id (INT(11))
year (INT(11))
round (INT(11))
circuit_id (INT(11))
name (VARCHAR(255))
date (DATE)
time (TIME)
url (VARCHAR(255))
fp1_date (VARCHAR(255))
fp1_time (VARCHAR(255))
fp2_date (VARCHAR(255))
fp2_time (VARCHAR(255))
fp3_date (VARCHAR(255))
fp3_time (VARCHAR(255))
quali_date (VARCHAR(255))
quali_time (VARCHAR(255))
sprint_date (VARCHAR(255))
sprint_time (VARCHAR(255))
Table: results
result_id (INT(11))
race_id (INT(11))
driver_id (INT(11))
constructor_id (INT(11))
number (INT(11))
grid (INT(11))
position (INT(11))
position_text (VARCHAR(255))
position_order (INT(11))
points (FLOAT)
laps (INT(11))
time (VARCHAR(255))
milliseconds (INT(11))
fastest_lap (INT(11))
rank (INT(11))
fastest_lap_time (VARCHAR(255))
fastest_lap_speed (VARCHAR(255))
status_id (INT(11))
Table: seasons
year (INT(11))
url (VARCHAR(255))
Table: status
status_id (INT(11))
status (VARCHAR(255))
Table: sprint_results
result_id (INT(11))
race_id (INT(11))
driver_id (INT(11))
constructor_id (INT(11))
number (INT(11))
grid (INT(11))
position (INT(11))
position_text (VARCHAR(255))
position_order (INT(11))
points (FLOAT)
laps (INT(11))
time (VARCHAR(255))
milliseconds (INT(11))
fastest_lap (INT(11))
fastest_lap_time (VARCHAR(255))
fastest_lap_speed (VARCHAR(255))
status_id (INT(11))
Table: short_grand_prix_names
full_name (VARCHAR(255))
short_name (VARCHAR(255))
Table: short_constructor_names
constructor_ref (VARCHAR(255))
short_name (VARCHAR(255))
Table: liveries
constructor_ref (VARCHAR(255))
start_year (INT(11))
end_year (INT(11))
primary_hex_code (VARCHAR(255))
Table: tdr_overrides
year (INT(11))
constructor_ref (VARCHAR(255))
driver_ref (VARCHAR(255))
team_driver_rank (INT(11))
Table: circuits_ext
circuit_id (INT)
circuit_ref (TEXT)
name (TEXT)
location (TEXT)
country (TEXT)
lat (REAL)
lng (REAL)
alt (INT)
url (TEXT)
last_race_year ()
number_of_races ()
Table: constructors_ext
constructor_id (INT)
constructor_ref (TEXT)
name (TEXT)
nationality (TEXT)
url (TEXT)
short_name ()
Table: drivers_ext
driver_id (INT)
driver_ref (TEXT)
number (INT)
code ()
forename (TEXT)
surname (TEXT)
full_name (TEXT)
dob (NUM)
nationality (TEXT)
url (TEXT)
Table: driver_standings_ext
driver_standings_id (INT)
race_id (INT)
driver_id (INT)
points (REAL)
position (INT)
position_text (TEXT)
wins (INT)
Table: lap_times_ext
race_id (INT)
driver_id (INT)
lap (INT)
position (INT)
time (TEXT)
milliseconds (INT)
seconds (REAL)
running_milliseconds ()
Table: lap_time_stats
race_id (INT)
driver_id (INT)
avg_milliseconds ()
avg_seconds ()
stdev_milliseconds ()
stdev_seconds ()
Table: races_ext
race_id (INT)
year (INT)
round (INT)
circuit_id (INT)
name (TEXT)
date (NUM)
time (NUM)
url (TEXT)
fp1_date (TEXT)
fp1_time (TEXT)
fp2_date (TEXT)
fp2_time (TEXT)
fp3_date (TEXT)
fp3_time (TEXT)
quali_date (TEXT)
quali_time (TEXT)
sprint_date (TEXT)
sprint_time (TEXT)
is_pit_data_available ()
short_name ()
has_sprint ()
max_points ()
Table: team_driver_ranks
year (INT)
constructor_id (INT)
constructor_ref (TEXT)
driver_id (INT)
driver_ref (TEXT)
team_driver_rank ()
Table: drives
year (INT)
driver_id (INT)
drive_id ()
constructor_id (INT)
first_round (INT)
last_round (INT)
is_first_drive_of_season ()
is_final_drive_of_season ()
Table: retirements
race_id (INT)
driver_id (INT)
lap ()
position_order (INT)
status_id (INT)
retirement_type ()
Table: lap_positions
race_id (INT)
driver_id (INT)
lap (INT)
position (INT)
lap_type ()
Output Format:
In your answer, please provide:
<thinking>
Think through the problem step-by-step, analyzing the database schema and considering different approaches.
</thinking>
<answer>
```sql
--- Your SQL query
```
</answer>
''' |
haizelabs-org/epic-judge-4-24-v2 | haizelabs-org | 2025-04-26T21:36:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-04-26T21:35:02Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
genki10/BERT_V8_sp10_lw40_ex50_lo100_k5_k5_fold2 | genki10 | 2025-04-26T21:16:10Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-26T20:58:08Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex50_lo100_k5_k5_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex50_lo100_k5_k5_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6533
- Qwk: 0.4873
- Mse: 0.6530
- Rmse: 0.8081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 8.5936 | 0.0 | 8.5938 | 2.9315 |
| No log | 2.0 | 6 | 6.4304 | 0.0 | 6.4305 | 2.5359 |
| No log | 3.0 | 9 | 4.8713 | 0.0102 | 4.8715 | 2.2072 |
| No log | 4.0 | 12 | 3.4208 | 0.0 | 3.4211 | 1.8496 |
| No log | 5.0 | 15 | 2.3530 | 0.1286 | 2.3534 | 1.5341 |
| No log | 6.0 | 18 | 1.6334 | 0.0280 | 1.6337 | 1.2782 |
| No log | 7.0 | 21 | 1.1215 | 0.0107 | 1.1219 | 1.0592 |
| No log | 8.0 | 24 | 1.1657 | 0.0107 | 1.1662 | 1.0799 |
| No log | 9.0 | 27 | 0.7898 | 0.3442 | 0.7903 | 0.8890 |
| No log | 10.0 | 30 | 0.8306 | 0.1611 | 0.8310 | 0.9116 |
| No log | 11.0 | 33 | 1.0342 | 0.0875 | 1.0346 | 1.0171 |
| No log | 12.0 | 36 | 0.6920 | 0.4277 | 0.6923 | 0.8320 |
| No log | 13.0 | 39 | 0.8449 | 0.2860 | 0.8452 | 0.9194 |
| No log | 14.0 | 42 | 0.7467 | 0.3972 | 0.7470 | 0.8643 |
| No log | 15.0 | 45 | 0.6532 | 0.4403 | 0.6534 | 0.8083 |
| No log | 16.0 | 48 | 0.7208 | 0.4676 | 0.7210 | 0.8491 |
| No log | 17.0 | 51 | 0.6031 | 0.4712 | 0.6032 | 0.7767 |
| No log | 18.0 | 54 | 0.6595 | 0.5072 | 0.6595 | 0.8121 |
| No log | 19.0 | 57 | 0.6646 | 0.5208 | 0.6646 | 0.8152 |
| No log | 20.0 | 60 | 0.5305 | 0.6035 | 0.5303 | 0.7282 |
| No log | 21.0 | 63 | 1.4858 | 0.3180 | 1.4858 | 1.2189 |
| No log | 22.0 | 66 | 0.6223 | 0.5293 | 0.6221 | 0.7888 |
| No log | 23.0 | 69 | 0.5328 | 0.5917 | 0.5326 | 0.7298 |
| No log | 24.0 | 72 | 0.8937 | 0.3542 | 0.8937 | 0.9454 |
| No log | 25.0 | 75 | 0.5174 | 0.5633 | 0.5174 | 0.7193 |
| No log | 26.0 | 78 | 0.5613 | 0.5248 | 0.5612 | 0.7492 |
| No log | 27.0 | 81 | 0.9702 | 0.3565 | 0.9702 | 0.9850 |
| No log | 28.0 | 84 | 0.5281 | 0.5528 | 0.5279 | 0.7266 |
| No log | 29.0 | 87 | 0.9891 | 0.3671 | 0.9890 | 0.9945 |
| No log | 30.0 | 90 | 0.9355 | 0.3809 | 0.9354 | 0.9672 |
| No log | 31.0 | 93 | 0.4945 | 0.5867 | 0.4943 | 0.7031 |
| No log | 32.0 | 96 | 0.6917 | 0.4518 | 0.6915 | 0.8316 |
| No log | 33.0 | 99 | 0.8254 | 0.4023 | 0.8252 | 0.9084 |
| No log | 34.0 | 102 | 0.5053 | 0.5452 | 0.5052 | 0.7108 |
| No log | 35.0 | 105 | 0.7413 | 0.4330 | 0.7411 | 0.8609 |
| No log | 36.0 | 108 | 0.6220 | 0.4819 | 0.6217 | 0.7885 |
| No log | 37.0 | 111 | 0.6184 | 0.5053 | 0.6182 | 0.7862 |
| No log | 38.0 | 114 | 0.7743 | 0.4270 | 0.7741 | 0.8798 |
| No log | 39.0 | 117 | 0.6755 | 0.4591 | 0.6753 | 0.8217 |
| No log | 40.0 | 120 | 0.6848 | 0.4384 | 0.6846 | 0.8274 |
| No log | 41.0 | 123 | 0.5342 | 0.5320 | 0.5340 | 0.7308 |
| No log | 42.0 | 126 | 0.6045 | 0.5006 | 0.6042 | 0.7773 |
| No log | 43.0 | 129 | 0.5897 | 0.5168 | 0.5895 | 0.7678 |
| No log | 44.0 | 132 | 0.5954 | 0.5035 | 0.5952 | 0.7715 |
| No log | 45.0 | 135 | 0.7638 | 0.4302 | 0.7636 | 0.8738 |
| No log | 46.0 | 138 | 0.6257 | 0.5051 | 0.6254 | 0.7908 |
| No log | 47.0 | 141 | 0.7981 | 0.4183 | 0.7979 | 0.8933 |
| No log | 48.0 | 144 | 0.6160 | 0.5173 | 0.6158 | 0.7847 |
| No log | 49.0 | 147 | 0.7008 | 0.4770 | 0.7007 | 0.8371 |
| No log | 50.0 | 150 | 0.6533 | 0.4873 | 0.6530 | 0.8081 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
marialvsantiago/5a44958b-cb47-4a2f-be91-e6a75660bbf0 | marialvsantiago | 2025-04-26T21:06:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:aisingapore/Llama-SEA-LION-v2-8B-IT",
"base_model:adapter:aisingapore/Llama-SEA-LION-v2-8B-IT",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T20:33:26Z | ---
library_name: peft
license: llama3
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5a44958b-cb47-4a2f-be91-e6a75660bbf0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 290f33bc12bd7560_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/290f33bc12bd7560_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/5a44958b-cb47-4a2f-be91-e6a75660bbf0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/290f33bc12bd7560_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e22eed74-bb16-4ff3-b35b-e7aa69efe8f6
wandb_project: s56-33
wandb_run: your_name
wandb_runid: e22eed74-bb16-4ff3-b35b-e7aa69efe8f6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5a44958b-cb47-4a2f-be91-e6a75660bbf0
This model is a fine-tuned version of [aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct](https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.581 | 0.0048 | 200 | 1.4265 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF | mradermacher | 2025-04-26T21:00:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Mahesh2841/ibm-granite3.3-8b-ethical-v0.2",
"base_model:quantized:Mahesh2841/ibm-granite3.3-8b-ethical-v0.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T20:25:38Z | ---
base_model: Mahesh2841/ibm-granite3.3-8b-ethical-v0.2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Mahesh2841/ibm-granite3.3-8b-ethical-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
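For a single-file quant such as the Q4_K_M file listed below, a minimal llama.cpp invocation could look like the following sketch; the prompt is a placeholder.
```bash
# Hypothetical example: run the Q4_K_M quant from the table below with llama.cpp.
llama-cli --hf-repo mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF \
  --hf-file ibm-granite3.3-8b-ethical-v0.2.Q4_K_M.gguf \
  -p "Write a short note on responsible AI use."
```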
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ibm-granite3.3-8b-ethical-v0.2-GGUF/resolve/main/ibm-granite3.3-8b-ethical-v0.2.f16.gguf) | f16 | 16.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mlfoundations-dev/b2_science_fasttext_pos_scp116k_3k | mlfoundations-dev | 2025-04-26T20:49:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T01:12:32Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_fasttext_pos_scp116k_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_fasttext_pos_scp116k_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_pos_scp116k_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF | Triangle104 | 2025-04-26T19:49:59Z | 2 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:AI-MO/NuminaMath-TIR",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:HuggingFaceTB/smoltalk",
"dataset:cognitivecomputations/samantha-data",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"base_model:cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B",
"base_model:quantized:cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-06T06:43:31Z | ---
base_model: cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B`](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF --hf-file dolphin3.0-qwen2.5-0.5b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF --hf-file dolphin3.0-qwen2.5-0.5b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF --hf-file dolphin3.0-qwen2.5-0.5b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF --hf-file dolphin3.0-qwen2.5-0.5b-q5_k_s.gguf -c 2048
```
|
mradermacher/grpo-qwen7b-triton-5ep-GGUF | mradermacher | 2025-04-26T18:46:31Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:tcapelle/grpo-qwen7b-triton-5ep",
"base_model:quantized:tcapelle/grpo-qwen7b-triton-5ep",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T18:09:53Z | ---
base_model: tcapelle/grpo-qwen7b-triton-5ep
language:
- en
library_name: transformers
model_name: workspace/data/axolotl-artifacts/grpo-beta-zero
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tcapelle/grpo-qwen7b-triton-5ep
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
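As a hedged, minimal example (llama-cpp-python is an assumption on my part and is not mentioned in this card), a single-file quant from the table below can be downloaded and run like this:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python (assumed, not part of this card)

# Download one of the static quants listed below, e.g. the recommended Q4_K_M file.
gguf_path = hf_hub_download(
    repo_id="mradermacher/grpo-qwen7b-triton-5ep-GGUF",
    filename="grpo-qwen7b-triton-5ep.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Write a simple Triton kernel that adds two vectors:", max_tokens=128)
print(out["choices"][0]["text"])
```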
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/grpo-qwen7b-triton-5ep-GGUF/resolve/main/grpo-qwen7b-triton-5ep.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sergrabo/q-FrozenLake-v1-4x4-noSlippery | sergrabo | 2025-04-26T18:44:55Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-26T18:40:59Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the helper provided in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="sergrabo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
rizalmn23/Rizal_Muttaqin | rizalmn23 | 2025-04-26T18:32:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T18:32:58Z | ---
license: apache-2.0
---
|
leiredsol/mdeberta-v3-base-majority1.1 | leiredsol | 2025-04-26T18:16:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-26T18:15:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leiredsol/roberta-base-majority1.1 | leiredsol | 2025-04-26T17:48:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-26T17:48:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/QwQ-32B-ArliAI-RpR-v2-Q4_K_S-GGUF | Triangle104 | 2025-04-26T17:44:32Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:ArliAI/QwQ-32B-ArliAI-RpR-v2",
"base_model:quantized:ArliAI/QwQ-32B-ArliAI-RpR-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T17:41:17Z | ---
base_model: ArliAI/QwQ-32B-ArliAI-RpR-v2
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/9TIfNBdy29CDnn8NNIQPt.jpeg
---
# Triangle104/QwQ-32B-ArliAI-RpR-v2-Q4_K_S-GGUF
This model was converted to GGUF format from [`ArliAI/QwQ-32B-ArliAI-RpR-v2`](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v2) for more details on the model.
---
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series.
RpR models use the same curated, deduplicated RP and creative writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize the unique, non-repetitive writing style unlike other finetuned-for-RP models.
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative-writing reasoning datasets contain only one response per example. Training reasoning models on this kind of single-response data degrades output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
In order to create RpR, we first had to create the reasoning RP dataset itself by re-processing our existing, known-good RPMax dataset into a reasoning dataset. This was done by using the base QwQ Instruct model to generate the reasoning process for every turn in the RPMax conversation examples, which was then further refined to make sure the reasoning is in line with the actual response examples from the dataset.
Another important thing to get right is to make sure the model is trained on examples that present reasoning blocks in the same way it encounters them during inference: that is, never seeing the reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual, template-free segments dataset, so that the model is never trained to see a reasoning block in its context, just as at inference time.
The result of training QwQ on this dataset with this method is consistently coherent and interesting output, even in long multi-turn RP chats. As far as we know, this is the first true, correctly trained reasoning model for RP and creative writing.
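As an illustrative aside (not part of the original description), the same principle applies at inference time: frontends normally strip the reasoning block from earlier assistant turns before re-sending the chat history. Assuming the reasoning is wrapped in `<think>...</think>` tags, a minimal version of that filtering might look like this:
```python
import re

THINK_BLOCK = re.compile(r"<think>.*?</think>", flags=re.DOTALL)

def strip_reasoning(messages):
    """Drop reasoning blocks from earlier assistant turns so the model never
    sees chain-of-thought in its context, mirroring the training setup."""
    cleaned = []
    for msg in messages:
        if msg["role"] == "assistant":
            msg = {**msg, "content": THINK_BLOCK.sub("", msg["content"]).strip()}
        cleaned.append(msg)
    return cleaned
```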
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v2-Q4_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v2-Q4_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v2-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v2-Q4_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v2-Q4_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v2-q4_k_s.gguf -c 2048
```
|
yqyqyq123/Qwen2-0.5B-SFT | yqyqyq123 | 2025-04-26T16:34:23Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:AI-MO/NuminaMath-TIR",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T14:08:10Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-SFT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2-0.5B-SFT
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yqyqyq123/Qwen2-0.5B-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0
- Transformers: 4.47.1
- Pytorch: 2.6.0+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF | Triangle104 | 2025-04-26T16:19:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:allura-org/Gemma-3-Glitter-27B",
"base_model:quantized:allura-org/Gemma-3-Glitter-27B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T16:15:31Z | ---
base_model: allura-org/Gemma-3-Glitter-27B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF
This model was converted to GGUF format from [`allura-org/Gemma-3-Glitter-27B`](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) for more details on the model.
---
A creative writing model based on Gemma 3 27B.
Columbidae/gemma-3-27b-half, a 50/50 merge of 27B IT and 27B PT, was used as the base model. (This was done because of the success of Starshine, a 50/50 IT and PT merge.)
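For readers unfamiliar with merging, a 50/50 linear merge simply averages the two checkpoints' weights. The sketch below is a hypothetical illustration with placeholder model IDs and plain 🤗 Transformers; the actual merge was produced with mergekit and may use a different method:
```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder IDs: substitute the instruction-tuned (IT) and pretrained (PT)
# checkpoints to blend. Loading two full checkpoints requires ample memory.
it = AutoModelForCausalLM.from_pretrained("org/model-it", torch_dtype=torch.bfloat16)
pt = AutoModelForCausalLM.from_pretrained("org/model-pt", torch_dtype=torch.bfloat16)

pt_state = pt.state_dict()
merged = {name: 0.5 * p + 0.5 * pt_state[name] for name, p in it.state_dict().items()}
it.load_state_dict(merged)
it.save_pretrained("model-half")  # analogous to the 50/50 IT+PT base used here
```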
Including the PT model does weaken instruction following, but it also weakens the censorship and hesitancy to participate in certain fictional stories. The prose also becomes more natural as the share of the IT model decreases.
This model does better with short, to-the-point prompts; long, detailed system prompts will often confuse it. (Tested with 1000-2000-token system prompts, which gave lackluster results compared to 100-500-token prompts.)
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF --hf-file gemma-3-glitter-27b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF --hf-file gemma-3-glitter-27b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF --hf-file gemma-3-glitter-27b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF --hf-file gemma-3-glitter-27b-q8_0.gguf -c 2048
```
|
kenonix/gemma-3-ko-4B-1-qat2-Q8_0-GGUF | kenonix | 2025-04-26T15:56:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:kenonix/gemma-3-ko-4B-1-qat2",
"base_model:quantized:kenonix/gemma-3-ko-4B-1-qat2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T15:55:40Z | ---
base_model: kenonix/gemma-3-ko-4B-1-qat2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- llama-cpp
- gguf-my-repo
---
# kenonix/gemma-3-ko-4B-1-qat2-Q8_0-GGUF
This model was converted to GGUF format from [`kenonix/gemma-3-ko-4B-1-qat2`](https://huggingface.co/kenonix/gemma-3-ko-4B-1-qat2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kenonix/gemma-3-ko-4B-1-qat2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo kenonix/gemma-3-ko-4B-1-qat2-Q8_0-GGUF --hf-file gemma-3-ko-4b-1-qat2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo kenonix/gemma-3-ko-4B-1-qat2-Q8_0-GGUF --hf-file gemma-3-ko-4b-1-qat2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo kenonix/gemma-3-ko-4B-1-qat2-Q8_0-GGUF --hf-file gemma-3-ko-4b-1-qat2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo kenonix/gemma-3-ko-4B-1-qat2-Q8_0-GGUF --hf-file gemma-3-ko-4b-1-qat2-q8_0.gguf -c 2048
```
|
vermoney/dc79f207-3fc1-4657-8cfb-3590de7b59bb | vermoney | 2025-04-26T15:34:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T15:19:16Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc79f207-3fc1-4657-8cfb-3590de7b59bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0d6a0bc04d9e6de0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0d6a0bc04d9e6de0_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/dc79f207-3fc1-4657-8cfb-3590de7b59bb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0d6a0bc04d9e6de0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 88de6a8c-4476-48c4-ae4c-40759385d6e5
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 88de6a8c-4476-48c4-ae4c-40759385d6e5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# dc79f207-3fc1-4657-8cfb-3590de7b59bb
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the dataset specified in the axolotl config above (`0d6a0bc04d9e6de0_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 0.4411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_bnb_8bit (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3719 | 0.0068 | 200 | 0.4411 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
chenggong1995/Qwen-2.5-Base-7B-gen8-math3to5_olympiads_aime-ghpo-cold0-3Dhint-prompt1 | chenggong1995 | 2025-04-26T11:53:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:chenggong1995/math3to5_olympiads_aime",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T08:04:08Z | ---
base_model: Qwen/Qwen2.5-7B
datasets: chenggong1995/math3to5_olympiads_aime
library_name: transformers
model_name: Qwen-2.5-Base-7B-gen8-math3to5_olympiads_aime-ghpo-cold0-3Dhint-prompt1
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-Base-7B-gen8-math3to5_olympiads_aime-ghpo-cold0-3Dhint-prompt1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the [chenggong1995/math3to5_olympiads_aime](https://huggingface.co/datasets/chenggong1995/math3to5_olympiads_aime) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chenggong1995/Qwen-2.5-Base-7B-gen8-math3to5_olympiads_aime-ghpo-cold0-3Dhint-prompt1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gongc1995-city-university-of-hong-kong/huggingface/runs/kw3pmaxd)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
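A minimal, hypothetical sketch of GRPO training with TRL is shown below; the actual reward functions, generation settings, and any hint-based curriculum used for this run are not reproduced in this card:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Assumes the dataset exposes a "prompt" column, as GRPOTrainer expects.
dataset = load_dataset("chenggong1995/math3to5_olympiads_aime", split="train")

def accuracy_reward(completions, **kwargs):
    # Placeholder reward: 1.0 when the completion contains a boxed final answer.
    return [1.0 if "\\boxed" in str(c) else 0.0 for c in completions]

training_args = GRPOConfig(output_dir="qwen2.5-7b-grpo", num_generations=8)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B",
    reward_funcs=accuracy_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```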
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf | RichardErkhov | 2025-04-26T10:33:22Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T08:34:00Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-7B-Instruct-it-v1.1-v1.0 - GGUF
- Model creator: https://huggingface.co/homeb82784/
- Original model: https://huggingface.co/homeb82784/Qwen2-7B-Instruct-it-v1.1-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q2_K.gguf) | Q2_K | 2.81GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K.gguf) | Q3_K | 3.55GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_0.gguf) | Q4_0 | 4.13GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K.gguf) | Q4_K | 4.36GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_1.gguf) | Q4_1 | 4.54GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_0.gguf) | Q5_0 | 4.95GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K.gguf) | Q5_K | 5.07GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_1.gguf) | Q5_1 | 5.36GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q6_K.gguf) | Q6_K | 5.82GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q8_0.gguf) | Q8_0 | 7.54GB |
Original model description:
---
base_model: Qwen2-7B-Instruct-it-v1.1
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
atha182/karin | atha182 | 2025-04-26T10:19:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T10:19:30Z | ---
license: apache-2.0
---
|
nice2mitya/a_818450456 | nice2mitya | 2025-04-26T09:50:00Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-26T09:22:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
JeorgeC/BocchiGemma3_1BVer02 | JeorgeC | 2025-04-26T09:21:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T09:21:03Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JeorgeC
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Vaishu16/SmolLM2-FT-DPO | Vaishu16 | 2025-04-26T08:59:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-27T05:11:06Z | ---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: transformers
model_name: SmolLM2-FT-DPO
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- dpo
licence: license
---
# Model Card for SmolLM2-FT-DPO
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Vaishu16/SmolLM2-FT-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
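A minimal, hypothetical sketch of DPO fine-tuning with TRL follows; the preference dataset used for this smol-course exercise is not named in the card, so the one below is a placeholder:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference dataset with "chosen"/"rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="SmolLM2-FT-DPO", beta=0.1)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```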
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
greenwich157/testmodel | greenwich157 | 2025-04-26T07:19:56Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T07:10:45Z | ---
license: apache-2.0
---
|
mechai-copilot/qwen2.5-0.5B-instruct-apply | mechai-copilot | 2025-04-26T07:19:32Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T03:58:52Z | ---
base_model: unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mechai-copilot
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Drlegit12/TheFuture | Drlegit12 | 2025-04-26T07:07:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T07:07:58Z | ---
license: apache-2.0
---
|
BootesVoid/cm9xopb9n00ukrbgi878ctkmt_cm9xt7fc50142rbgigxilj1mp | BootesVoid | 2025-04-26T07:00:41Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-26T07:00:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ALYSSA
---
# Cm9Xopb9N00Ukrbgi878Ctkmt_Cm9Xt7Fc50142Rbgigxilj1Mp
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ALYSSA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ALYSSA",
"lora_weights": "https://huggingface.co/BootesVoid/cm9xopb9n00ukrbgi878ctkmt_cm9xt7fc50142rbgigxilj1mp/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm9xopb9n00ukrbgi878ctkmt_cm9xt7fc50142rbgigxilj1mp', weight_name='lora.safetensors')
image = pipeline('ALYSSA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm9xopb9n00ukrbgi878ctkmt_cm9xt7fc50142rbgigxilj1mp/discussions) to add images that show off what you’ve made with this LoRA.
|
onnx-community/opus-mt-zh-en | onnx-community | 2025-04-26T06:00:27Z | 9 | 1 | transformers.js | [
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-zh-en",
"base_model:quantized:Helsinki-NLP/opus-mt-zh-en",
"license:cc-by-4.0",
"region:us"
] | translation | 2024-08-27T18:57:36Z | ---
base_model: Helsinki-NLP/opus-mt-zh-en
library_name: transformers.js
license: cc-by-4.0
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-zh-en with ONNX weights to be compatible with Transformers.js.
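For Python users, here is a hedged sketch of the equivalent export-and-run flow with 🤗 Optimum and ONNX Runtime (an illustration only, not part of the original note; this repo itself targets Transformers.js):
```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer, pipeline

# export=True converts the PyTorch checkpoint to ONNX on the fly via Optimum.
model = ORTModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en", export=True)
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")

translator = pipeline("translation", model=model, tokenizer=tokenizer)
print(translator("你好,世界!")[0]["translation_text"])
```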
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
fedorabrenda/fedorabrenda | fedorabrenda | 2025-04-26T05:31:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-04-26T05:31:58Z | ---
license: creativeml-openrail-m
---
|
cl-nagoya/ruri-base-v2 | cl-nagoya | 2025-04-26T04:04:35Z | 10,220 | 4 | null | [
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ja",
"dataset:cl-nagoya/ruri-dataset-v2-ft",
"arxiv:2409.07737",
"base_model:cl-nagoya/ruri-pt-base-v2",
"base_model:finetune:cl-nagoya/ruri-pt-base-v2",
"license:apache-2.0",
"region:us"
] | sentence-similarity | 2024-12-05T01:25:34Z | ---
language:
- ja
tags:
- sentence-similarity
- feature-extraction
base_model: cl-nagoya/ruri-pt-base-v2
widget: []
pipeline_tag: sentence-similarity
license: apache-2.0
datasets:
- cl-nagoya/ruri-dataset-v2-ft
---
# Ruri: Japanese General Text Embeddings
**Note: v3 models are out!**
We recommend using the following v3 models going forward.
|ID| #Param.|Max Len.|Avg. JMTEB|
|-|-|-|-|
|[cl-nagoya/ruri-v3-30m](https://huggingface.co/cl-nagoya/ruri-v3-30m)|37M|8192|74.51|
|[cl-nagoya/ruri-v3-70m](https://huggingface.co/cl-nagoya/ruri-v3-70m)|70M|8192|75.48|
|[cl-nagoya/ruri-v3-130m](https://huggingface.co/cl-nagoya/ruri-v3-130m)|132M|8192|76.55|
|[cl-nagoya/ruri-v3-310m](https://huggingface.co/cl-nagoya/ruri-v3-310m)|315M|8192|77.24|
## Usage
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers fugashi sentencepiece unidic-lite
```
Then you can load this model and run inference.
```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("cl-nagoya/ruri-base-v2")
# Don't forget to add the prefix "クエリ: " for query-side or "文章: " for passage-side texts.
sentences = [
"クエリ: 瑠璃色はどんな色?",
"文章: 瑠璃色(るりいろ)は、紫みを帯びた濃い青。名は、半貴石の瑠璃(ラピスラズリ、英: lapis lazuli)による。JIS慣用色名では「こい紫みの青」(略号 dp-pB)と定義している[1][2]。",
"クエリ: ワシやタカのように、鋭いくちばしと爪を持った大型の鳥類を総称して「何類」というでしょう?",
"文章: ワシ、タカ、ハゲワシ、ハヤブサ、コンドル、フクロウが代表的である。これらの猛禽類はリンネ前後の時代(17~18世紀)には鷲類・鷹類・隼類及び梟類に分類された。ちなみにリンネは狩りをする鳥を単一の目(もく)にまとめ、vultur(コンドル、ハゲワシ)、falco(ワシ、タカ、ハヤブサなど)、strix(フクロウ)、lanius(モズ)の4属を含めている。",
]
embeddings = model.encode(sentences, convert_to_tensor=True)
print(embeddings.size())
# [4, 768]
similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
print(similarities)
```
## Benchmarks
### JMTEB
Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
|Model|#Param.|Avg.|Retrieval|STS|Classification|Reranking|Clustering|PairClassification|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|[cl-nagoya/sup-simcse-ja-base](https://huggingface.co/cl-nagoya/sup-simcse-ja-base)|111M|68.56|49.64|82.05|73.47|91.83|51.79|62.57|
|[cl-nagoya/sup-simcse-ja-large](https://huggingface.co/cl-nagoya/sup-simcse-ja-large)|337M|66.51|37.62|83.18|73.73|91.48|50.56|62.51|
|[cl-nagoya/unsup-simcse-ja-base](https://huggingface.co/cl-nagoya/unsup-simcse-ja-base)|111M|65.07|40.23|78.72|73.07|91.16|44.77|62.44|
|[cl-nagoya/unsup-simcse-ja-large](https://huggingface.co/cl-nagoya/unsup-simcse-ja-large)|337M|66.27|40.53|80.56|74.66|90.95|48.41|62.49|
|[pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja)|133M|70.44|59.02|78.71|76.82|91.90|49.78|66.39|
||||||||||
|[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)|472M|64.70|40.12|76.56|72.66|91.63|44.88|62.33|
|[intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small)|118M|69.52|67.27|80.07|67.62|93.03|46.91|62.19|
|[intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base)|278M|70.12|68.21|79.84|69.30|92.85|48.26|62.26|
|[intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)|560M|71.65|70.98|79.70|72.89|92.96|51.24|62.15|
||||||||||
|OpenAI/text-embedding-ada-002|-|69.48|64.38|79.02|69.75|93.04|48.30|62.40|
|OpenAI/text-embedding-3-small|-|70.86|66.39|79.46|73.06|92.92|51.06|62.27|
|OpenAI/text-embedding-3-large|-|73.97|74.48|82.52|77.58|93.58|53.32|62.35|
||||||||||
|[Ruri-Small](https://huggingface.co/cl-nagoya/ruri-small)|68M|71.53|69.41|82.79|76.22|93.00|51.19|62.11|
|[Ruri-Small v2](https://huggingface.co/cl-nagoya/ruri-small-v2)|68M|73.30|73.94|82.91|76.17|93.20|51.58|62.32|
|[Ruri-Base](https://huggingface.co/cl-nagoya/ruri-base)|111M|71.91|69.82|82.87|75.58|92.91|54.16|62.38|
|[**Ruri-Base v2**](https://huggingface.co/cl-nagoya/ruri-base-v2) (this model)|111M|**72.48**|72.33|83.03|75.34|93.17|51.38|62.35|
|[Ruri-Large](https://huggingface.co/cl-nagoya/ruri-large)|337M|73.31|73.02|83.13|77.43|92.99|51.82|62.29|
|[Ruri-Large v2](https://huggingface.co/cl-nagoya/ruri-large-v2)|337M|74.55|76.34|83.17|77.18|93.21|52.14|62.27|
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [cl-nagoya/ruri-pt-base-v2](https://huggingface.co/cl-nagoya/ruri-pt-base-v2)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768
- **Similarity Function:** Cosine Similarity
- **Language:** Japanese
- **License:** Apache 2.0
- **Paper:** https://arxiv.org/abs/2409.07737
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.0
- Transformers: 4.41.2
- PyTorch: 2.3.1+cu118
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
```bibtex
@misc{
Ruri,
title={{Ruri: Japanese General Text Embeddings}},
author={Hayato Tsukagoshi and Ryohei Sasano},
year={2024},
eprint={2409.07737},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.07737},
}
```
## License
This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). |
YWChang/Llama-3.2-finetuned-Q4_K_M-GGUF | YWChang | 2025-04-26T03:41:52Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:YWChang/Llama-3.2-finetuned",
"base_model:quantized:YWChang/Llama-3.2-finetuned",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T03:41:46Z | ---
base_model: YWChang/Llama-3.2-finetuned
tags:
- llama-cpp
- gguf-my-repo
---
# YWChang/Llama-3.2-finetuned-Q4_K_M-GGUF
This model was converted to GGUF format from [`YWChang/Llama-3.2-finetuned`](https://huggingface.co/YWChang/Llama-3.2-finetuned) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/YWChang/Llama-3.2-finetuned) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo YWChang/Llama-3.2-finetuned-Q4_K_M-GGUF --hf-file llama-3.2-finetuned-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo YWChang/Llama-3.2-finetuned-Q4_K_M-GGUF --hf-file llama-3.2-finetuned-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo YWChang/Llama-3.2-finetuned-Q4_K_M-GGUF --hf-file llama-3.2-finetuned-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo YWChang/Llama-3.2-finetuned-Q4_K_M-GGUF --hf-file llama-3.2-finetuned-q4_k_m.gguf -c 2048
```
|
minhtuan7akp/qwen2.5_0.5b_base_scratch_reasoning_finetune | minhtuan7akp | 2025-04-26T03:06:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:finetune:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T03:05:28Z | ---
base_model: unsloth/Qwen2.5-0.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhtuan7akp
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
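No inference example is included; a minimal, untested sketch using the standard `transformers` text-generation pipeline (the prompt is illustrative only) could look like this:
```python
from transformers import pipeline

# Hypothetical usage of this fine-tuned Qwen2.5-0.5B checkpoint
generator = pipeline(
    "text-generation",
    model="minhtuan7akp/qwen2.5_0.5b_base_scratch_reasoning_finetune",
)
prompt = "Question: If a train travels 60 km in 45 minutes, what is its average speed? Answer step by step:"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```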
|
dgambettaphd/M_llm3_gen8_run0_X_doc1000_synt64_tot128_MPP | dgambettaphd | 2025-04-25T22:49:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T22:48:30Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hasdal/43a3f83a-b7d5-498e-b2b9-9d6d4964b867 | hasdal | 2025-04-25T20:54:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | 2025-04-25T20:47:54Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 43a3f83a-b7d5-498e-b2b9-9d6d4964b867
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 47fc413d9583e070_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47fc413d9583e070_train_data.json
type:
field_input: functions
field_instruction: user
field_output: function_call
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: hasdal/43a3f83a-b7d5-498e-b2b9-9d6d4964b867
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000208
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_bias: none
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 128
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/47fc413d9583e070_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: false
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8c09ec2-5766-457f-b30d-6bbac24d824e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8c09ec2-5766-457f-b30d-6bbac24d824e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: false
```
</details><br>
# 43a3f83a-b7d5-498e-b2b9-9d6d4964b867
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
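No usage instructions are given. As a rough, untested sketch (assuming this repository holds a standard PEFT LoRA adapter for the listed base model, and noting that the reported validation loss is NaN, so outputs may not be meaningful), the adapter could be loaded like this:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Llama-3.1-Storm-8B"
adapter_id = "hasdal/43a3f83a-b7d5-498e-b2b9-9d6d4964b867"

# Load the base model, then attach the LoRA adapter produced by this training run
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "List the available functions, then call the one that answers: What is the weather in Paris?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```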
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000208
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.0 | 0.0013 | 3 | nan |
| 0.0 | 0.0026 | 6 | nan |
| 0.0 | 0.0039 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Taklacii/hds | Taklacii | 2025-04-25T17:56:54Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0",
"base_model:adapter:Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0",
"region:us"
] | text-to-image | 2025-04-25T17:44:52Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: a
parameters:
negative_prompt: a
output:
url: images/ayarlar.png
base_model: Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0
instance_prompt: s
---
# s
<Gallery />
## Model description
s
## Trigger words
You should use `s` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Taklacii/hds/tree/main) them in the Files & versions tab.
|
chenjoya/LiveCC-7B-Instruct | chenjoya | 2025-04-25T13:48:32Z | 2,919 | 19 | null | [
"safetensors",
"qwen2_vl",
"qwen_vl",
"video",
"real-time",
"multimodal",
"LLM",
"en",
"dataset:chenjoya/Live-CC-5M",
"dataset:chenjoya/Live-WhisperX-526K",
"dataset:lmms-lab/LLaVA-Video-178K",
"arxiv:2504.16030",
"base_model:Qwen/Qwen2-VL-7B",
"base_model:finetune:Qwen/Qwen2-VL-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-03-25T12:07:01Z | ---
license: apache-2.0
datasets:
- chenjoya/Live-CC-5M
- chenjoya/Live-WhisperX-526K
- lmms-lab/LLaVA-Video-178K
language:
- en
base_model:
- Qwen/Qwen2-VL-7B
tags:
- qwen_vl
- video
- real-time
- multimodal
- LLM
---
# LiveCC-7B-Instruct
## Introduction
We introduce LiveCC, the first video LLM capable of real-time commentary, trained with a novel video-ASR streaming method and achieving SOTA on both streaming and offline benchmarks.
- Project Page: https://showlab.github.io/livecc
> [!Important]
> This is the SFT model. The base model is at [LiveCC-7B-Base](https://huggingface.co/chenjoya/LiveCC-7B-Base).
## Training with Streaming Frame-Words Paradigm

## Quickstart
### Gradio Demo
Please refer to https://github.com/showlab/livecc:

### Hands-on
Like qwen-vl-utils, we offer a toolkit to help you handle various types of visual input more conveniently, **especially streaming video inputs**. You can install it using the following command:
```bash
pip install qwen-vl-utils livecc-utils liger_kernel
```
Here is a code snippet showing how to do **real-time video commentary** with `transformers` and the above utils:
```python
import functools, torch, os, tqdm
from liger_kernel.transformers import apply_liger_kernel_to_qwen2_vl
apply_liger_kernel_to_qwen2_vl() # important. our model is trained with this. keep consistency
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor, LogitsProcessor, logging
from livecc_utils import prepare_multiturn_multimodal_inputs_for_generation, get_smart_resized_clip, get_smart_resized_video_reader
from qwen_vl_utils import process_vision_info
class LiveCCDemoInfer:
fps = 2
initial_fps_frames = 6
streaming_fps_frames = 2
initial_time_interval = initial_fps_frames / fps
streaming_time_interval = streaming_fps_frames / fps
frame_time_interval = 1 / fps
def __init__(self, model_path: str = None, device_id: int = 0):
self.model = Qwen2VLForConditionalGeneration.from_pretrained(
model_path, torch_dtype="auto",
device_map=f'cuda:{device_id}',
attn_implementation='flash_attention_2'
)
self.processor = AutoProcessor.from_pretrained(model_path, use_fast=False)
self.model.prepare_inputs_for_generation = functools.partial(prepare_multiturn_multimodal_inputs_for_generation, self.model)
message = {
"role": "user",
"content": [
{"type": "text", "text": 'livecc'},
]
}
texts = self.processor.apply_chat_template([message], tokenize=False)
self.system_prompt_offset = texts.index('<|im_start|>user')
self._cached_video_readers_with_hw = {}
def live_cc(
self,
query: str,
state: dict,
max_pixels: int = 384 * 28 * 28,
default_query: str = 'Please describe the video.',
do_sample: bool = True,
repetition_penalty: float = 1.05,
**kwargs,
):
"""
state: dict, (maybe) with keys:
video_path: str, video path
video_timestamp: float, current video timestamp
last_timestamp: float, last processed video timestamp
last_video_pts_index: int, last processed video frame index
video_pts: np.ndarray, video pts
last_history: list, last processed history
past_key_values: llm past_key_values
past_ids: past generated ids
"""
# 1. preparation: video_reader, and last processing info
video_timestamp, last_timestamp = state.get('video_timestamp', 0), state.get('last_timestamp', -1 / self.fps)
video_path = state['video_path']
if video_path not in self._cached_video_readers_with_hw:
self._cached_video_readers_with_hw[video_path] = get_smart_resized_video_reader(video_path, max_pixels)
video_reader = self._cached_video_readers_with_hw[video_path][0]
video_reader.get_frame_timestamp(0)
state['video_pts'] = torch.from_numpy(video_reader._frame_pts[:, 1])
state['last_video_pts_index'] = -1
video_pts = state['video_pts']
if last_timestamp + self.frame_time_interval > video_pts[-1]:
state['video_end'] = True
return
video_reader, resized_height, resized_width = self._cached_video_readers_with_hw[video_path]
last_video_pts_index = state['last_video_pts_index']
# 2. which frames will be processed
initialized = last_timestamp >= 0
if not initialized:
video_timestamp = max(video_timestamp, self.initial_time_interval)
if video_timestamp <= last_timestamp + self.frame_time_interval:
return
timestamps = torch.arange(last_timestamp + self.frame_time_interval, video_timestamp, self.frame_time_interval) # add compensation
# 3. fetch frames in required timestamps
clip, clip_timestamps, clip_idxs = get_smart_resized_clip(video_reader, resized_height, resized_width, timestamps, video_pts, video_pts_index_from=last_video_pts_index+1)
state['last_video_pts_index'] = clip_idxs[-1]
state['last_timestamp'] = clip_timestamps[-1]
# 4. organize to interleave frames
interleave_clips, interleave_timestamps = [], []
if not initialized:
interleave_clips.append(clip[:self.initial_fps_frames])
interleave_timestamps.append(clip_timestamps[:self.initial_fps_frames])
clip = clip[self.initial_fps_frames:]
clip_timestamps = clip_timestamps[self.initial_fps_frames:]
if len(clip) > 0:
interleave_clips.extend(list(clip.split(self.streaming_fps_frames)))
interleave_timestamps.extend(list(clip_timestamps.split(self.streaming_fps_frames)))
# 5. make conversation and send to model
for clip, timestamps in zip(interleave_clips, interleave_timestamps):
start_timestamp, stop_timestamp = timestamps[0].item(), timestamps[-1].item() + self.frame_time_interval
message = {
"role": "user",
"content": [
{"type": "text", "text": f'Time={start_timestamp:.1f}-{stop_timestamp:.1f}s'},
{"type": "video", "video": clip}
]
}
if not query and not state.get('query', None):
query = default_query
print(f'No query provided, use default_query={default_query}')
if query and state.get('query', None) != query:
message['content'].append({"type": "text", "text": query})
state['query'] = query
texts = self.processor.apply_chat_template([message], tokenize=False, add_generation_prompt=True, return_tensors='pt')
past_ids = state.get('past_ids', None)
if past_ids is not None:
texts = '<|im_end|>\n' + texts[self.system_prompt_offset:]
inputs = self.processor(
text=texts,
images=None,
videos=[clip],
return_tensors="pt",
return_attention_mask=False
)
inputs.to('cuda')
if past_ids is not None:
inputs['input_ids'] = torch.cat([past_ids, inputs.input_ids], dim=1)
outputs = self.model.generate(
**inputs, past_key_values=state.get('past_key_values', None),
return_dict_in_generate=True, do_sample=do_sample,
repetition_penalty=repetition_penalty,
)
state['past_key_values'] = outputs.past_key_values
state['past_ids'] = outputs.sequences[:, :-1]
yield (start_timestamp, stop_timestamp), self.processor.decode(outputs.sequences[0, inputs.input_ids.size(1):], skip_special_tokens=True), state
model_path = 'chenjoya/LiveCC-7B-Instruct'
# download a test video at: https://github.com/showlab/livecc/blob/main/demo/sources/howto_fix_laptop_mute_1080p.mp4
video_path = "demo/sources/howto_fix_laptop_mute_1080p.mp4"
query = "Please describe the video."
infer = LiveCCDemoInfer(model_path=model_path)
state = {'video_path': video_path}
commentaries = []
t = 0
for t in range(31):
state['video_timestamp'] = t
for (start_t, stop_t), response, state in infer.live_cc(
query=query, state=state,
max_pixels = 384 * 28 * 28, repetition_penalty=1.05,
streaming_eos_base_threshold=0.0, streaming_eos_threshold_step=0
):
print(f'{start_t}s-{stop_t}s: {response}')
commentaries.append([start_t, stop_t, response])
if state.get('video_end', False):
break
t += 1
```
Here is a code snippet showing how to do **common video (multi-turn) QA** with `transformers` and the above utils:
```python
import functools, torch
from liger_kernel.transformers import apply_liger_kernel_to_qwen2_vl
apply_liger_kernel_to_qwen2_vl() # important. our model is trained with this. keep consistency
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor, LogitsProcessor, logging
from livecc_utils import prepare_multiturn_multimodal_inputs_for_generation, get_smart_resized_clip, get_smart_resized_video_reader
from qwen_vl_utils import process_vision_info
class LiveCCDemoInfer:
fps = 2
initial_fps_frames = 6
streaming_fps_frames = 2
initial_time_interval = initial_fps_frames / fps
streaming_time_interval = streaming_fps_frames / fps
frame_time_interval = 1 / fps
def __init__(self, model_path: str = None, device: str = 'cuda'):
self.model = Qwen2VLForConditionalGeneration.from_pretrained(
model_path, torch_dtype="auto",
device_map=device,
attn_implementation='flash_attention_2'
)
self.processor = AutoProcessor.from_pretrained(model_path, use_fast=False)
self.streaming_eos_token_id = self.processor.tokenizer(' ...').input_ids[-1]
self.model.prepare_inputs_for_generation = functools.partial(prepare_multiturn_multimodal_inputs_for_generation, self.model)
message = {
"role": "user",
"content": [
{"type": "text", "text": 'livecc'},
]
}
texts = self.processor.apply_chat_template([message], tokenize=False)
self.system_prompt_offset = texts.index('<|im_start|>user')
def video_qa(
self,
message: str,
state: dict,
do_sample: bool = True,
repetition_penalty: float = 1.05,
**kwargs,
):
"""
state: dict, (maybe) with keys:
video_path: str, video path
video_timestamp: float, current video timestamp
last_timestamp: float, last processed video timestamp
last_video_pts_index: int, last processed video frame index
video_pts: np.ndarray, video pts
last_history: list, last processed history
past_key_values: llm past_key_values
past_ids: past generated ids
"""
video_path = state.get('video_path', None)
conversation = []
past_ids = state.get('past_ids', None)
content = [{"type": "text", "text": message}]
if past_ids is None and video_path: # only use once
content.insert(0, {"type": "video", "video": video_path})
conversation.append({"role": "user", "content": content})
image_inputs, video_inputs = process_vision_info(conversation)
texts = self.processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True, return_tensors='pt')
if past_ids is not None:
texts = '<|im_end|>\n' + texts[self.system_prompt_offset:]
inputs = self.processor(
text=texts,
images=image_inputs,
videos=video_inputs,
return_tensors="pt",
return_attention_mask=False
)
inputs.to(self.model.device)
if past_ids is not None:
inputs['input_ids'] = torch.cat([past_ids, inputs.input_ids], dim=1)
outputs = self.model.generate(
**inputs, past_key_values=state.get('past_key_values', None),
return_dict_in_generate=True, do_sample=do_sample,
repetition_penalty=repetition_penalty,
max_new_tokens=512,
)
state['past_key_values'] = outputs.past_key_values
state['past_ids'] = outputs.sequences[:, :-1]
response = self.processor.decode(outputs.sequences[0, inputs.input_ids.size(1):], skip_special_tokens=True)
return response, state
model_path = 'chenjoya/LiveCC-7B-Instruct'
# download a test video at: https://github.com/showlab/livecc/blob/main/demo/sources/howto_fix_laptop_mute_1080p.mp4
video_path = "demo/sources/howto_fix_laptop_mute_1080p.mp4"
infer = LiveCCDemoInfer(model_path=model_path)
state = {'video_path': video_path}
# first round
query1 = 'What is the video?'
response1, state = infer.video_qa(message=query1, state=state)
print(f'Q1: {query1}\nA1: {response1}')
# second round
query2 = 'How do you know that?'
response2, state = infer.video_qa(message=query2, state=state)
print(f'Q2: {query2}\nA2: {response2}')
```
## Performance


## Limitations
- This model is finetuned on LiveCC-7B-Base, which is starting from Qwen2-VL-7B-Base, so it may have limitations mentioned in https://huggingface.co/Qwen/Qwen2-VL-7B.
- When performing real-time video commentary, the output may collapse --- e.g., into repeated patterns. If you encounter this, try adjusting repetition_penalty, streaming_eos_base_threshold, and streaming_eos_threshold_step.
- This model has a context window of only 32768 tokens. Using more visual tokens per frame (e.g. 768 * 28 * 28) yields better performance but shortens the working duration.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{livecc,
author = {Joya Chen and Ziyun Zeng and Yiqi Lin and Wei Li and Zejun Ma and Mike Zheng Shou},
title = {LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale},
journal = {arXiv preprint arXiv:2504.16030},
year = {2025},
}
``` |
parvk11/intent_classification_model | parvk11 | 2025-04-25T04:12:12Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-25T04:08:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
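This section is empty in the original card; given the repository's `text-classification` pipeline tag and DistilBERT weights, a minimal hedged sketch (the label names are whatever the author configured and are not documented here) would be:
```python
from transformers import pipeline

# Assumes the checkpoint ships a standard DistilBERT sequence-classification head
classifier = pipeline("text-classification", model="parvk11/intent_classification_model")
print(classifier("Can you reset my password?"))
```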
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LUcowork/e5_stage2 | LUcowork | 2025-04-25T03:19:36Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:128997",
"loss:MultipleNegativesRankingLoss",
"dataset:hobbang/stage2-dataset",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:suhwan3/e5-step1",
"base_model:finetune:suhwan3/e5-step1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-25T03:17:29Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:128997
- loss:MultipleNegativesRankingLoss
base_model: suhwan3/e5-step1
widget:
- source_sentence: The Global X S&P 500 Risk Managed Income ETF seeks to track the
Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets
in index securities. The index's strategy involves holding the underlying stocks
of the S&P 500 Index while applying an options collar, specifically selling at-the-money
covered call options and buying monthly 5% out-of-the-money put options corresponding
to the portfolio's value. This approach aims to generate income, ideally resulting
in a net credit from the options premiums, and provide risk management, though
selling at-the-money calls inherently caps the fund's potential for upside participation.
sentences:
- Nasdaq, Inc. operates as a technology company that serves capital markets and
other industries worldwide. The Market Technology segment includes anti financial
crime technology business, which offers Nasdaq Trade Surveillance, a SaaS solution
for brokers and other market participants to assist them in complying with market
rules, regulations, and internal market surveillance policies; Nasdaq Automated
Investigator, a cloud-deployed anti-money laundering tool; and Verafin, a SaaS
technology provider of anti-financial crime management solutions. This segment
also handles assets, such as cash equities, equity derivatives, currencies, interest-bearing
securities, commodities, energy products, and digital currencies. The Investment
Intelligence segment sells and distributes historical and real-time market data;
develops and licenses Nasdaq-branded indexes and financial products; and provides
investment insights and workflow solutions. The Corporate Platforms segment operates
listing platforms; and offers investor relations intelligence and governance solutions.
As of December 31, 2021, it had 4,178 companies listed securities on The Nasdaq
Stock Market, including 1,632 listings on The Nasdaq Global Select Market; 1,169
on The Nasdaq Global Market; and 1,377 on The Nasdaq Capital Market. The Market
Services segment includes equity derivative trading and clearing, cash equity
trading, fixed income and commodities trading and clearing, and trade management
service businesses. This segment operates various exchanges and other marketplace
facilities across various asset classes, which include derivatives, commodities,
cash equity, debt, structured products, and exchange traded products; and provides
broker, clearing, settlement, and central depository services. The company was
formerly known as The NASDAQ OMX Group, Inc. and changed its name to Nasdaq, Inc.
in September 2015. Nasdaq, Inc. was founded in 1971 and is headquartered in New
York, New York.
- Jabil Inc. provides manufacturing services and solutions worldwide. The company
operates in two segments, Electronics Manufacturing Services and Diversified Manufacturing
Services. It offers electronics design, production, and product management services.
The company provides electronic design services, such as application-specific
integrated circuit design, firmware development, and rapid prototyping services;
and designs plastic and metal enclosures that include the electro-mechanics, such
as the printed circuit board assemblies (PCBA). It also specializes in the three-dimensional
mechanical design comprising the analysis of electronic, electro-mechanical, and
optical assemblies, as well as offers various industrial design, mechanism development,
and tooling management services. In addition, the company provides computer-assisted
design services consisting of PCBA design, as well as PCBA design validation and
verification services; and other consulting services, such as the generation of
a bill of materials, approved vendor list, and assembly equipment configuration
for various PCBA designs. Further, it offers product and process validation services,
such as product system, product safety, regulatory compliance, and reliability
tests, as well as manufacturing test solution development services. Additionally,
the company provides systems assembly, test, direct-order fulfillment, and configure-to-order
services. It serves 5G, wireless and cloud, digital print and retail, industrial
and semi-cap, networking and storage, automotive and transportation, connected
devices, healthcare and packaging, and mobility industries. The company was formerly
known as Jabil Circuit, Inc. and changed its name to Jabil Inc. in June 2017.
Jabil Inc. was founded in 1966 and is headquartered in Saint Petersburg, Florida.
- 'Realty Income, The Monthly Dividend Company, is an S&P 500 company dedicated
to providing stockholders with dependable monthly income. The company is structured
as a REIT, and its monthly dividends are supported by the cash flow from over
6,500 real estate properties owned under long-term lease agreements with our commercial
clients. To date, the company has declared 608 consecutive common stock monthly
dividends throughout its 52-year operating history and increased the dividend
109 times since Realty Income''s public listing in 1994 (NYSE: O). The company
is a member of the S&P 500 Dividend Aristocrats index. Additional information
about the company can be obtained from the corporate website at www.realtyincome.com.'
- source_sentence: The iShares U.S. Telecommunications ETF (IYZ) seeks to track the
investment results of the Russell 1000 Telecommunications RIC 22.5/45 Capped Index,
which measures the performance of the U.S. telecommunications sector of the U.S.
equity market as defined by FTSE Russell. This market-cap-weighted index includes
large-cap companies involved in telecom equipment and service provision and is
subject to regulatory capping that limits single holdings to 22.5% and aggregate
large holdings to 45%. The fund generally invests at least 80% of its assets in
the component securities of its underlying index and is non-diversified; the underlying
index is rebalanced quarterly.
sentences:
- Kanzhun Limited operates an online recruitment platform, BOSS Zhipin in the People's
Republic of China. Its recruitment platform assists the recruitment process between
job seekers and employers for enterprises, and corporations. The company was founded
in 2013 and is headquartered in Beijing, the People's Republic of China.
- Frontier Communications Parent, Inc., together with its subsidiaries, provides
communications services for consumer and business customers in 25 states in the
United States. It offers data and Internet, voice, video, and other services.
The company was formerly known as Frontier Communications Corporation and changed
its name to Frontier Communications Parent, Inc. in April 2021. Frontier Communications
Parent, Inc. was incorporated in 1935 and is based in Norwalk, Connecticut.
- Broadcom Inc. designs, develops, and supplies various semiconductor devices with
a focus on complex digital and mixed signal complementary metal oxide semiconductor
based devices and analog III-V based products worldwide. The company operates
in two segments, Semiconductor Solutions and Infrastructure Software. It provides
set-top box system-on-chips (SoCs); cable, digital subscriber line, and passive
optical networking central office/consumer premise equipment SoCs; wireless local
area network access point SoCs; Ethernet switching and routing merchant silicon
products; embedded processors and controllers; serializer/deserializer application
specific integrated circuits; optical and copper, and physical layers; and fiber
optic transmitter and receiver components. The company also offers RF front end
modules, filters, and power amplifiers; Wi-Fi, Bluetooth, and global positioning
system/global navigation satellite system SoCs; custom touch controllers; serial
attached small computer system interface, and redundant array of independent disks
controllers and adapters; peripheral component interconnect express switches;
fiber channel host bus adapters; read channel based SoCs; custom flash controllers;
preamplifiers; and optocouplers, industrial fiber optics, and motion control encoders
and subsystems. Its products are used in various applications, including enterprise
and data center networking, home connectivity, set-top boxes, broadband access,
telecommunication equipment, smartphones and base stations, data center servers
and storage systems, factory automation, power generation and alternative energy
systems, and electronic displays. Broadcom Inc. was incorporated in 2018 and is
headquartered in San Jose, California.
- source_sentence: The Xtrackers MSCI Emerging Markets ESG Leaders Equity ETF tracks
an index of large- and mid-cap emerging market stocks that emphasize strong environmental,
social, and governance (ESG) characteristics. The index first excludes companies
involved in specific controversial industries. From the remaining universe, it
ranks stocks based on MSCI ESG scores, including a controversy component, to identify
and select the highest-ranking ESG leaders, effectively screening out ESG laggards.
To maintain market-like country and sector weights, the index selects the top
ESG-scoring stocks within each sector until a specified market capitalization
threshold is reached. Selected stocks are then weighted by market capitalization
within their respective sectors. The fund typically invests over 80% of its assets
in the securities of this underlying index.
sentences:
- Info Edge (India) Limited operates as an online classifieds company in the areas
of recruitment, matrimony, real estate, and education and related services in
India and internationally. It operates through Recruitment Solutions, 99acres,
and Other segments. The company offers recruitment services through naukri.com,
an online job website for job seekers and corporate customers, including hiring
consultants; firstnaukri.com, a job search network for college students and recent
graduates; naukrigulf.com, a website catering to Gulf markets; and quadranglesearch.com,
a site that provides off-line placement services to middle and senior management,
as well as Highorbit/iimjobs.com, zwayam.com, hirist.com, doselect.com, ambitionbox.com,
bigshyft.com, and jobhai.com. It also provides 99acres.com, which offers listing
of properties for sale, purchase, and rent; Jeevansathi.com, an online matrimonial
classifieds services; and shiksha.com, an education classified website that helps
students to decide their undergraduate and postgraduate options by providing useful
information on careers, exams, colleges, and courses, as well as operates multiple
dating platforms on the web through its mobile apps Aisle, Anbe, Arike and HeyDil.
In addition, the company provides internet, computer, and electronic and related
services; and software development, consultancy, technical support for consumer
companies, SAAS providers, and other services in the field of information technology
and product development, as well as brokerage services in the real estate sector.
Further, it acts as an investment adviser and manager, financial and management
consultant, and sponsor of alternative investment funds, as well as provides advertising
space for colleges and universities on www.shiksha.com. Info Edge (India) Limited
was incorporated in 1995 and is based in Noida, India.
- China Overseas Land & Investment Limited, an investment holding company, engages
in the property development and investment, and other operations in the People's
Republic of China and the United Kingdom. The company operates through Property
Development, Property Investment, and Other Operations segments. It is involved
in the investment, development, and rental of residential and commercial properties;
issuance of guaranteed notes and corporate bonds; and hotel operation activities.
The company also provides construction and building design consultancy services.
In addition, it engages in the investment and financing, land consolidation, regional
planning, engineering construction, industrial import, commercial operation, and
property management. Further, the company offers urban services, including office
buildings, flexible working space, shopping malls, star-rated hotels, long-term
rental apartments, logistics parks, and architectural design and construction.
The company was founded in 1979 and is based in Central, Hong Kong. China Overseas
Land & Investment Limited is a subsidiary of China Overseas Holdings Limited.
- Mastercard Incorporated, a technology company, provides transaction processing
and other payment-related products and services in the United States and internationally.
It facilitates the processing of payment transactions, including authorization,
clearing, and settlement, as well as delivers other payment-related products and
services. The company offers integrated products and value-added services for
account holders, merchants, financial institutions, businesses, governments, and
other organizations, such as programs that enable issuers to provide consumers
with credits to defer payments; prepaid programs and management services; commercial
credit and debit payment products and solutions; and payment products and solutions
that allow its customers to access funds in deposit and other accounts. It also
provides value-added products and services comprising cyber and intelligence solutions
for parties to transact, as well as proprietary insights, drawing on principled
use of consumer, and merchant data services. In addition, the company offers analytics,
test and learn, consulting, managed services, loyalty, processing, and payment
gateway solutions for e-commerce merchants. Further, it provides open banking
and digital identity platforms services. The company offers payment solutions
and services under the MasterCard, Maestro, and Cirrus. Mastercard Incorporated
was founded in 1966 and is headquartered in Purchase, New York.
- source_sentence: The Global X S&P 500 Risk Managed Income ETF seeks to track the
Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets
in index securities. The index's strategy involves holding the underlying stocks
of the S&P 500 Index while applying an options collar, specifically selling at-the-money
covered call options and buying monthly 5% out-of-the-money put options corresponding
to the portfolio's value. This approach aims to generate income, ideally resulting
in a net credit from the options premiums, and provide risk management, though
selling at-the-money calls inherently caps the fund's potential for upside participation.
sentences:
- Incyte Corporation, a biopharmaceutical company, focuses on the discovery, development,
and commercialization of proprietary therapeutics in the United States and internationally.
The company offers JAKAFI, a drug for the treatment of myelofibrosis and polycythemia
vera; PEMAZYRE, a fibroblast growth factor receptor kinase inhibitor that act
as oncogenic drivers in various liquid and solid tumor types; and ICLUSIG, a kinase
inhibitor to treat chronic myeloid leukemia and philadelphia-chromosome positive
acute lymphoblastic leukemia. Its clinical stage products include ruxolitinib,
a steroid-refractory chronic graft-versus-host-diseases (GVHD); itacitinib, which
is in Phase II/III clinical trial to treat naive chronic GVHD; and pemigatinib
for treating bladder cancer, cholangiocarcinoma, myeloproliferative syndrome,
and tumor agnostic. In addition, the company engages in developing Parsaclisib,
which is in Phase II clinical trial for follicular lymphoma, marginal zone lymphoma,
and mantel cell lymphoma. Additionally, it develops Retifanlimab that is in Phase
II clinical trials for MSI-high endometrial cancer, merkel cell carcinoma, and
anal cancer, as well as in Phase II clinical trials for patients with non-small
cell lung cancer. It has collaboration agreements with Novartis International
Pharmaceutical Ltd.; Eli Lilly and Company; Agenus Inc.; Calithera Biosciences,
Inc; MacroGenics, Inc.; Merus N.V.; Syros Pharmaceuticals, Inc.; Innovent Biologics,
Inc.; Zai Lab Limited; Cellenkos, Inc.; and Nimble Therapeutics, as well as clinical
collaborations with MorphoSys AG and Xencor, Inc. to investigate the combination
of tafasitamab, plamotamab, and lenalidomide in patients with relapsed or refractory
diffuse large B-cell lymphoma, and relapsed or refractory follicular lymphoma.
The company was incorporated in 1991 and is headquartered in Wilmington, Delaware.
- Omnicom Group Inc., together with its subsidiaries, provides advertising, marketing,
and corporate communications services. It provides a range of services in the
areas of advertising, customer relationship management, public relations, and
healthcare. The company's services include advertising, branding, content marketing,
corporate social responsibility consulting, crisis communications, custom publishing,
data analytics, database management, digital/direct marketing, digital transformation,
entertainment marketing, experiential marketing, field marketing, financial/corporate
business-to-business advertising, graphic arts/digital imaging, healthcare marketing
and communications, and in-store design services. Its services also comprise interactive
marketing, investor relations, marketing research, media planning and buying,
merchandising and point of sale, mobile marketing, multi-cultural marketing, non-profit
marketing, organizational communications, package design, product placement, promotional
marketing, public affairs, retail marketing, sales support, search engine marketing,
shopper marketing, social media marketing, and sports and event marketing services.
It operates in the United States, Canada, Puerto Rico, South America, Mexico,
Europe, the Middle East, Africa, Australia, Greater China, India, Japan, Korea,
New Zealand, Singapore, and other Asian countries. The company was incorporated
in 1944 and is based in New York, New York.
- NetApp, Inc. provides cloud-led and data-centric services to manage and share
data on-premises, and private and public clouds worldwide. It operates in two
segments, Hybrid Cloud and Public Could. The company offers intelligent data management
software, such as NetApp ONTAP, NetApp Snapshot, NetApp SnapCenter Backup Management,
NetApp SnapMirror Data Replication, NetApp SnapLock Data Compliance, NetApp ElementOS
software, and NetApp SANtricity software; and storage infrastructure solutions,
including NetApp All-Flash FAS series, NetApp Fabric Attached Storage, NetApp
FlexPod, NetApp E/EF series, NetApp StorageGRID, and NetApp SolidFire. It also
provides cloud storage and data services comprising NetApp Cloud Volumes ONTAP,
Azure NetApp Files, Amazon FSx for NetApp ONTAP, NetApp Cloud Volumes Service
for Google Cloud, NetApp Cloud Sync, NetApp Cloud Tiering, NetApp Cloud Backup,
NetApp Cloud Data Sense, and NetApp Cloud Volumes Edge Cache; and cloud operations
services, such as NetApp Cloud Insights, Spot Ocean Kubernetes Suite, Spot Security,
Spot Eco, and Spot CloudCheckr. In addition, the company offers application-aware
data management service under the NetApp Astra name; and professional and support
services, such as strategic consulting, professional, managed, and support services.
Further, it provides assessment, design, implementation, and migration services.
The company serves the energy, financial service, government, technology, internet,
life science, healthcare service, manufacturing, media, entertainment, animation,
video postproduction, and telecommunication markets through a direct sales force
and an ecosystem of partners. NetApp, Inc. was incorporated in 1992 and is headquartered
in San Jose, California.
- source_sentence: The Global X S&P 500 Risk Managed Income ETF seeks to track the
Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets
in index securities. The index's strategy involves holding the underlying stocks
of the S&P 500 Index while applying an options collar, specifically selling at-the-money
covered call options and buying monthly 5% out-of-the-money put options corresponding
to the portfolio's value. This approach aims to generate income, ideally resulting
in a net credit from the options premiums, and provide risk management, though
selling at-the-money calls inherently caps the fund's potential for upside participation.
sentences:
- Walgreens Boots Alliance, Inc. operates as a pharmacy-led health and beauty retail
company. It operates through two segments, the United States and International.
The United States segment sells prescription drugs and an assortment of retail
products, including health, wellness, beauty, personal care, consumable, and general
merchandise products through its retail drugstores. It also provides central specialty
pharmacy services and mail services. As of August 31, 2021, this segment operated
8,965 retail stores under the Walgreens and Duane Reade brands in the United States;
and five specialty pharmacies. The International segment sells prescription drugs;
and health and wellness, beauty, personal care, and other consumer products through
its pharmacy-led health and beauty retail stores and optical practices, as well
as through boots.com and an integrated mobile application. It also engages in
pharmaceutical wholesaling and distribution business in Germany. As of August
31, 2021, this segment operated 4,031 retail stores under the Boots, Benavides,
and Ahumada in the United Kingdom, Thailand, Norway, the Republic of Ireland,
the Netherlands, Mexico, and Chile; and 548 optical practices, including 160 on
a franchise basis. Walgreens Boots Alliance, Inc. was founded in 1901 and is based
in Deerfield, Illinois.
- Middlesex Water Company owns and operates regulated water utility and wastewater
systems. It operates in two segments, Regulated and Non-Regulated. The Regulated
segment collects, treats, and distributes water on a retail and wholesale basis
to residential, commercial, industrial, and fire protection customers, as well
as provides regulated wastewater systems in New Jersey and Delaware. The Non-Regulated
segment provides non-regulated contract services for the operation and maintenance
of municipal and private water and wastewater systems in New Jersey and Delaware.
The company was incorporated in 1896 and is headquartered in Iselin, New Jersey.
- Liberty Broadband Corporation engages in the communications businesses. It operates
through GCI Holdings and Charter segments. The GCI Holdings segment provides a
range of wireless, data, video, voice, and managed services to residential customers,
businesses, governmental entities, and educational and medical institutions primarily
in Alaska under the GCI brand. The Charter segment offers subscription-based video
services comprising video on demand, high-definition television, and digital video
recorder service; local and long-distance calling, voicemail, call waiting, caller
ID, call forwarding, and other voice services, as well as international calling
services; and Spectrum TV. It also provides internet services, including an in-home
Wi-Fi product that provides customers with high-performance wireless routers and
managed Wi-Fi services; advanced community Wi-Fi; mobile internet; and a security
suite that offers protection against computer viruses and spyware. In addition,
this segment offers internet access, data networking, fiber connectivity to cellular
towers and office buildings, video entertainment, and business telephone services;
advertising services on cable television networks and digital outlets; and operates
regional sports and news networks. Liberty Broadband Corporation was incorporated
in 2014 and is based in Englewood, Colorado.
datasets:
- hobbang/stage2-dataset
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on suhwan3/e5-step1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [suhwan3/e5-step1](https://huggingface.co/suhwan3/e5-step1) on the [stage2-dataset](https://huggingface.co/datasets/hobbang/stage2-dataset) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [suhwan3/e5-step1](https://huggingface.co/suhwan3/e5-step1) <!-- at revision 9208a43bc7f1394fe52e954e6a6661be1c113ebc -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [stage2-dataset](https://huggingface.co/datasets/hobbang/stage2-dataset)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")  # placeholder: replace with this model's Hub repository id
# Run inference
sentences = [
"The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.",
'Walgreens Boots Alliance, Inc. operates as a pharmacy-led health and beauty retail company. It operates through two segments, the United States and International. The United States segment sells prescription drugs and an assortment of retail products, including health, wellness, beauty, personal care, consumable, and general merchandise products through its retail drugstores. It also provides central specialty pharmacy services and mail services. As of August 31, 2021, this segment operated 8,965 retail stores under the Walgreens and Duane Reade brands in the United States; and five specialty pharmacies. The International segment sells prescription drugs; and health and wellness, beauty, personal care, and other consumer products through its pharmacy-led health and beauty retail stores and optical practices, as well as through boots.com and an integrated mobile application. It also engages in pharmaceutical wholesaling and distribution business in Germany. As of August 31, 2021, this segment operated 4,031 retail stores under the Boots, Benavides, and Ahumada in the United Kingdom, Thailand, Norway, the Republic of Ireland, the Netherlands, Mexico, and Chile; and 548 optical practices, including 160 on a franchise basis. Walgreens Boots Alliance, Inc. was founded in 1901 and is based in Deerfield, Illinois.',
'Liberty Broadband Corporation engages in the communications businesses. It operates through GCI Holdings and Charter segments. The GCI Holdings segment provides a range of wireless, data, video, voice, and managed services to residential customers, businesses, governmental entities, and educational and medical institutions primarily in Alaska under the GCI brand. The Charter segment offers subscription-based video services comprising video on demand, high-definition television, and digital video recorder service; local and long-distance calling, voicemail, call waiting, caller ID, call forwarding, and other voice services, as well as international calling services; and Spectrum TV. It also provides internet services, including an in-home Wi-Fi product that provides customers with high-performance wireless routers and managed Wi-Fi services; advanced community Wi-Fi; mobile internet; and a security suite that offers protection against computer viruses and spyware. In addition, this segment offers internet access, data networking, fiber connectivity to cellular towers and office buildings, video entertainment, and business telephone services; advertising services on cable television networks and digital outlets; and operates regional sports and news networks. Liberty Broadband Corporation was incorporated in 2014 and is based in Englewood, Colorado.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
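Since semantic search is listed among the intended uses, here is a small, hypothetical retrieval sketch. It reuses the placeholder model id from the snippet above, and the query and corpus strings are made-up examples rather than actual training data.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")  # same placeholder as above

query = "An ETF that holds S&P 500 stocks and uses an options collar to generate income."
corpus = [
    "Apple Inc. designs, manufactures, and markets smartphones, personal computers, and tablets worldwide.",
    "JPMorgan Chase & Co. operates as a financial services company worldwide.",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Retrieve the top-1 corpus entry for the query, scored by cosine similarity.
hits = util.semantic_search(query_emb, corpus_emb, top_k=1)
print(hits)  # e.g. [[{'corpus_id': ..., 'score': ...}]]
```
Because the pipeline ends in a `Normalize()` module, the embeddings are unit-length, so dot-product and cosine-similarity scores coincide.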
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### stage2-dataset
* Dataset: [stage2-dataset](https://huggingface.co/datasets/hobbang/stage2-dataset) at [cd393c2](https://huggingface.co/datasets/hobbang/stage2-dataset/tree/cd393c24f4017971e95aa6f73736f2fcb45e30a0)
* Size: 128,997 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 117 tokens</li><li>mean: 166.66 tokens</li><li>max: 210 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 280.1 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>JPMorgan Chase & Co. operates as a financial services company worldwide. It operates through four segments: Consumer & Community Banking (CCB), Corporate & Investment Bank (CIB), Commercial Banking (CB), and Asset & Wealth Management (AWM). The CCB segment offers s deposit, investment and lending products, payments, and services to consumers; lending, deposit, and cash management and payment solutions to small businesses; mortgage origination and servicing activities; residential mortgages and home equity loans; and credit card, auto loan, and leasing services. The CIB segment provides investment banking products and services, including corporate strategy and structure advisory, and equity and debt markets capital-raising services, as well as loan origination and syndication; payments and cross-border financing; and cash and derivative instruments, risk management solutions, prime brokerage, and research. This segment also offers securities services, including custody, fund accounting ...</code> |
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>JPMorgan Chase & Co. operates as a financial services company worldwide. It operates through four segments: Consumer & Community Banking (CCB), Corporate & Investment Bank (CIB), Commercial Banking (CB), and Asset & Wealth Management (AWM). The CCB segment offers s deposit, investment and lending products, payments, and services to consumers; lending, deposit, and cash management and payment solutions to small businesses; mortgage origination and servicing activities; residential mortgages and home equity loans; and credit card, auto loan, and leasing services. The CIB segment provides investment banking products and services, including corporate strategy and structure advisory, and equity and debt markets capital-raising services, as well as loan origination and syndication; payments and cross-border financing; and cash and derivative instruments, risk management solutions, prime brokerage, and research. This segment also offers securities services, including custody, fund accounting ...</code> |
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>The Allstate Corporation, together with its subsidiaries, provides property and casualty, and other insurance products in the United States and Canada. The company operates through Allstate Protection; Protection Services; Allstate Health and Benefits; and Run-off Property-Liability segments. The Allstate Protection segment offers private passenger auto and homeowners insurance; other personal lines products; and commercial lines products under the Allstate and Encompass brand names. The Protection Services segment provides consumer product protection plans and related technical support for mobile phones, consumer electronics, furniture, and appliances; finance and insurance products, including vehicle service contracts, guaranteed asset protection waivers, road hazard tire and wheel, and paint and fabric protection; towing, jump-start, lockout, fuel delivery, and tire change services; device and mobile data collection services; data and analytic solutions using automotive telematics i...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
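For orientation, a minimal sketch of how this dataset and loss could be instantiated with the Sentence Transformers API. The "train" split name is an assumption, and this is a sketch rather than the exact training script.
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, losses, util

# (anchor, positive) pairs; the "train" split name is an assumption.
train_dataset = load_dataset("hobbang/stage2-dataset", split="train")

model = SentenceTransformer("suhwan3/e5-step1")

# Matches the parameters listed above; other anchors' positives in the
# same batch act as negatives for each anchor.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```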
### Evaluation Dataset
#### stage2-dataset
* Dataset: [stage2-dataset](https://huggingface.co/datasets/hobbang/stage2-dataset) at [cd393c2](https://huggingface.co/datasets/hobbang/stage2-dataset/tree/cd393c24f4017971e95aa6f73736f2fcb45e30a0)
* Size: 16,944 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 161 tokens</li><li>mean: 176.19 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 294.34 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>Apple Inc. designs, manufactures, and markets smartphones, personal computers, tablets, wearables, and accessories worldwide. The company offers iPhone, a line of smartphones; Mac, a line of personal computers; iPad, a line of multi-purpose tablets; and wearables, home, and accessories comprising AirPods, Apple TV, Apple Watch, Beats products, and HomePod. It also provides AppleCare support and cloud services; and operates various platforms, including the App Store that allow customers to discover and download applications and digital content, such as books, music, video, games, and podcasts, as well as advertising services include third-party licensing arrangements and its own advertising platforms. In addition, the company offers various subscription-based services, such as Apple Arcade, a game subscription service; Apple Fitness+, a personalized fitness service; Apple Music, which offers users a curated listening experience with on-demand radio stations; Apple News+, a subscription ...</code> |
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>Microsoft Corporation develops, licenses, and supports software, services, devices, and solutions worldwide. The company operates in three segments: Productivity and Business Processes, Intelligent Cloud, and More Personal Computing. The Productivity and Business Processes segment offers Office, Exchange, SharePoint, Microsoft Teams, Office 365 Security and Compliance, Microsoft Viva, and Skype for Business; Skype, Outlook.com, OneDrive, and LinkedIn; and Dynamics 365, a set of cloud-based and on-premises business solutions for organizations and enterprise divisions. The Intelligent Cloud segment licenses SQL, Windows Servers, Visual Studio, System Center, and related Client Access Licenses; GitHub that provides a collaboration platform and code hosting service for developers; Nuance provides healthcare and enterprise AI solutions; and Azure, a cloud platform. It also offers enterprise support, Microsoft consulting, and nuance professional services to assist customers in developing, de...</code> |
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>NVIDIA Corporation provides graphics, and compute and networking solutions in the United States, Taiwan, China, and internationally. The company's Graphics segment offers GeForce GPUs for gaming and PCs, the GeForce NOW game streaming service and related infrastructure, and solutions for gaming platforms; Quadro/NVIDIA RTX GPUs for enterprise workstation graphics; vGPU software for cloud-based visual and virtual computing; automotive platforms for infotainment systems; and Omniverse software for building 3D designs and virtual worlds. Its Compute & Networking segment provides Data Center platforms and systems for AI, HPC, and accelerated computing; Mellanox networking and interconnect solutions; automotive AI Cockpit, autonomous driving development agreements, and autonomous vehicle solutions; cryptocurrency mining processors; Jetson for robotics and other embedded platforms; and NVIDIA AI Enterprise and other software. The company's products are used in gaming, professional visualizat...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
- `dataloader_drop_last`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
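A minimal sketch of `SentenceTransformerTrainingArguments` mirroring the non-default hyperparameters above; the output directory is a placeholder, and all other values keep the library defaults.
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/e5-step2",  # placeholder, not taken from the card
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=3e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    bf16=True,
    dataloader_drop_last=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```
The no-duplicates batch sampler keeps identical anchors out of the same batch, so the in-batch negatives used by `MultipleNegativesRankingLoss` stay informative.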
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.0025 | 10 | 3.2434 | - |
| 0.0050 | 20 | 3.1529 | - |
| 0.0074 | 30 | 3.1541 | - |
| 0.0099 | 40 | 3.1721 | - |
| 0.0124 | 50 | 2.8615 | - |
| 0.0149 | 60 | 2.7943 | - |
| 0.0174 | 70 | 2.8572 | - |
| 0.0198 | 80 | 2.8025 | - |
| 0.0223 | 90 | 2.7688 | - |
| 0.0248 | 100 | 2.7029 | - |
| 0.0273 | 110 | 2.6609 | - |
| 0.0298 | 120 | 2.6807 | - |
| 0.0323 | 130 | 2.5567 | - |
| 0.0347 | 140 | 2.6335 | - |
| 0.0372 | 150 | 2.6509 | - |
| 0.0397 | 160 | 2.6173 | - |
| 0.0422 | 170 | 2.5776 | - |
| 0.0447 | 180 | 2.6556 | - |
| 0.0471 | 190 | 2.5436 | - |
| 0.0496 | 200 | 2.6695 | - |
| 0.0521 | 210 | 2.6238 | - |
| 0.0546 | 220 | 2.5281 | - |
| 0.0571 | 230 | 2.5471 | - |
| 0.0595 | 240 | 2.5133 | - |
| 0.0620 | 250 | 2.515 | - |
| 0.0645 | 260 | 2.549 | - |
| 0.0670 | 270 | 2.4789 | - |
| 0.0695 | 280 | 2.529 | - |
| 0.0719 | 290 | 2.4778 | - |
| 0.0744 | 300 | 2.6365 | - |
| 0.0769 | 310 | 2.4869 | - |
| 0.0794 | 320 | 2.4804 | - |
| 0.0819 | 330 | 2.6349 | - |
| 0.0843 | 340 | 2.5421 | - |
| 0.0868 | 350 | 2.6261 | - |
| 0.0893 | 360 | 2.4998 | - |
| 0.0918 | 370 | 2.4604 | - |
| 0.0943 | 380 | 2.4391 | - |
| 0.0968 | 390 | 2.4586 | - |
| 0.0992 | 400 | 2.363 | - |
| 0.1017 | 410 | 2.4781 | - |
| 0.1042 | 420 | 2.3992 | - |
| 0.1067 | 430 | 2.5011 | - |
| 0.1092 | 440 | 2.4925 | - |
| 0.1116 | 450 | 2.4634 | - |
| 0.1141 | 460 | 2.374 | - |
| 0.1166 | 470 | 2.47 | - |
| 0.1191 | 480 | 2.3879 | - |
| 0.1216 | 490 | 2.4724 | - |
| 0.1240 | 500 | 2.3785 | - |
| 0.1265 | 510 | 2.465 | - |
| 0.1290 | 520 | 2.4031 | - |
| 0.1315 | 530 | 2.479 | - |
| 0.1340 | 540 | 2.3908 | - |
| 0.1364 | 550 | 2.424 | - |
| 0.1389 | 560 | 2.5066 | - |
| 0.1414 | 570 | 2.4195 | - |
| 0.1439 | 580 | 2.3403 | - |
| 0.1464 | 590 | 2.4056 | - |
| 0.1488 | 600 | 2.5169 | - |
| 0.1513 | 610 | 2.3982 | - |
| 0.1538 | 620 | 2.3388 | - |
| 0.1563 | 630 | 2.3661 | - |
| 0.1588 | 640 | 2.3944 | - |
| 0.1613 | 650 | 2.4447 | - |
| 0.1637 | 660 | 2.3494 | - |
| 0.1662 | 670 | 2.4022 | - |
| 0.1687 | 680 | 2.4189 | - |
| 0.1712 | 690 | 2.5578 | - |
| 0.1737 | 700 | 2.3257 | - |
| 0.1761 | 710 | 2.3886 | - |
| 0.1786 | 720 | 2.4123 | - |
| 0.1811 | 730 | 2.3356 | - |
| 0.1836 | 740 | 2.3251 | - |
| 0.1861 | 750 | 2.3763 | - |
| 0.1885 | 760 | 2.3461 | - |
| 0.1910 | 770 | 2.3906 | - |
| 0.1935 | 780 | 2.3079 | - |
| 0.1960 | 790 | 2.3625 | - |
| 0.1985 | 800 | 2.407 | - |
| 0.2009 | 810 | 2.4349 | - |
| 0.2034 | 820 | 2.6694 | - |
| 0.2059 | 830 | 2.4116 | - |
| 0.2084 | 840 | 2.3552 | - |
| 0.2109 | 850 | 2.4232 | - |
| 0.2133 | 860 | 2.455 | - |
| 0.2158 | 870 | 2.331 | - |
| 0.2183 | 880 | 2.3231 | - |
| 0.2208 | 890 | 2.3441 | - |
| 0.2233 | 900 | 2.2612 | - |
| 0.2258 | 910 | 2.2744 | - |
| 0.2282 | 920 | 2.2202 | - |
| 0.2307 | 930 | 2.3144 | - |
| 0.2332 | 940 | 2.2821 | - |
| 0.2357 | 950 | 2.3194 | - |
| 0.2382 | 960 | 2.4394 | - |
| 0.2406 | 970 | 2.1918 | - |
| 0.2431 | 980 | 2.3256 | - |
| 0.2456 | 990 | 2.3285 | - |
| 0.2481 | 1000 | 2.3288 | 1.9891 |
| 0.2506 | 1010 | 2.3462 | - |
| 0.2530 | 1020 | 2.3088 | - |
| 0.2555 | 1030 | 2.215 | - |
| 0.2580 | 1040 | 2.3241 | - |
| 0.2605 | 1050 | 2.2073 | - |
| 0.2630 | 1060 | 2.1959 | - |
| 0.2654 | 1070 | 2.37 | - |
| 0.2679 | 1080 | 2.3663 | - |
| 0.2704 | 1090 | 2.2008 | - |
| 0.2729 | 1100 | 2.3766 | - |
| 0.2754 | 1110 | 2.3042 | - |
| 0.2778 | 1120 | 2.2124 | - |
| 0.2803 | 1130 | 2.1839 | - |
| 0.2828 | 1140 | 2.2635 | - |
| 0.2853 | 1150 | 2.2726 | - |
| 0.2878 | 1160 | 2.3131 | - |
| 0.2903 | 1170 | 2.2244 | - |
| 0.2927 | 1180 | 2.2071 | - |
| 0.2952 | 1190 | 2.2722 | - |
| 0.2977 | 1200 | 2.2883 | - |
| 0.3002 | 1210 | 2.2805 | - |
| 0.3027 | 1220 | 2.268 | - |
| 0.3051 | 1230 | 2.2111 | - |
| 0.3076 | 1240 | 2.2381 | - |
| 0.3101 | 1250 | 2.3316 | - |
| 0.3126 | 1260 | 2.2579 | - |
| 0.3151 | 1270 | 2.3303 | - |
| 0.3175 | 1280 | 2.1496 | - |
| 0.3200 | 1290 | 2.2816 | - |
| 0.3225 | 1300 | 2.2676 | - |
| 0.3250 | 1310 | 2.4031 | - |
| 0.3275 | 1320 | 2.2962 | - |
| 0.3299 | 1330 | 2.357 | - |
| 0.3324 | 1340 | 2.1618 | - |
| 0.3349 | 1350 | 2.2292 | - |
| 0.3374 | 1360 | 2.3064 | - |
| 0.3399 | 1370 | 2.2085 | - |
| 0.3423 | 1380 | 2.3681 | - |
| 0.3448 | 1390 | 2.185 | - |
| 0.3473 | 1400 | 2.2346 | - |
| 0.3498 | 1410 | 2.3735 | - |
| 0.3523 | 1420 | 2.3221 | - |
| 0.3548 | 1430 | 2.3357 | - |
| 0.3572 | 1440 | 2.2943 | - |
| 0.3597 | 1450 | 2.0894 | - |
| 0.3622 | 1460 | 2.2957 | - |
| 0.3647 | 1470 | 2.1793 | - |
| 0.3672 | 1480 | 2.2257 | - |
| 0.3696 | 1490 | 2.2414 | - |
| 0.3721 | 1500 | 2.1285 | - |
| 0.3746 | 1510 | 2.4221 | - |
| 0.3771 | 1520 | 2.2476 | - |
| 0.3796 | 1530 | 2.1072 | - |
| 0.3820 | 1540 | 2.2527 | - |
| 0.3845 | 1550 | 2.3188 | - |
| 0.3870 | 1560 | 2.2599 | - |
| 0.3895 | 1570 | 2.2309 | - |
| 0.3920 | 1580 | 2.2227 | - |
| 0.3944 | 1590 | 2.2546 | - |
| 0.3969 | 1600 | 2.1462 | - |
| 0.3994 | 1610 | 2.12 | - |
| 0.4019 | 1620 | 2.233 | - |
| 0.4044 | 1630 | 2.205 | - |
| 0.4068 | 1640 | 2.2024 | - |
| 0.4093 | 1650 | 2.2486 | - |
| 0.4118 | 1660 | 2.289 | - |
| 0.4143 | 1670 | 2.3016 | - |
| 0.4168 | 1680 | 2.063 | - |
| 0.4193 | 1690 | 2.1364 | - |
| 0.4217 | 1700 | 2.2191 | - |
| 0.4242 | 1710 | 2.1718 | - |
| 0.4267 | 1720 | 2.1524 | - |
| 0.4292 | 1730 | 2.2658 | - |
| 0.4317 | 1740 | 2.2978 | - |
| 0.4341 | 1750 | 2.1527 | - |
| 0.4366 | 1760 | 2.2312 | - |
| 0.4391 | 1770 | 2.2462 | - |
| 0.4416 | 1780 | 2.2673 | - |
| 0.4441 | 1790 | 2.2392 | - |
| 0.4465 | 1800 | 2.1426 | - |
| 0.4490 | 1810 | 2.3702 | - |
| 0.4515 | 1820 | 2.3869 | - |
| 0.4540 | 1830 | 2.2688 | - |
| 0.4565 | 1840 | 2.1012 | - |
| 0.4589 | 1850 | 2.1748 | - |
| 0.4614 | 1860 | 2.2232 | - |
| 0.4639 | 1870 | 2.1726 | - |
| 0.4664 | 1880 | 2.2097 | - |
| 0.4689 | 1890 | 2.2102 | - |
| 0.4713 | 1900 | 2.3145 | - |
| 0.4738 | 1910 | 2.1053 | - |
| 0.4763 | 1920 | 2.1154 | - |
| 0.4788 | 1930 | 2.1107 | - |
| 0.4813 | 1940 | 2.1472 | - |
| 0.4838 | 1950 | 2.1771 | - |
| 0.4862 | 1960 | 2.0639 | - |
| 0.4887 | 1970 | 2.0658 | - |
| 0.4912 | 1980 | 2.2208 | - |
| 0.4937 | 1990 | 2.21 | - |
| 0.4962 | 2000 | 2.2042 | 1.8790 |
| 0.4986 | 2010 | 2.1517 | - |
| 0.5011 | 2020 | 2.1699 | - |
| 0.5036 | 2030 | 2.1208 | - |
| 0.5061 | 2040 | 2.043 | - |
| 0.5086 | 2050 | 2.0806 | - |
| 0.5110 | 2060 | 2.1554 | - |
| 0.5135 | 2070 | 2.1162 | - |
| 0.5160 | 2080 | 2.0013 | - |
| 0.5185 | 2090 | 2.0849 | - |
| 0.5210 | 2100 | 2.2321 | - |
| 0.5234 | 2110 | 2.2313 | - |
| 0.5259 | 2120 | 2.0902 | - |
| 0.5284 | 2130 | 2.1391 | - |
| 0.5309 | 2140 | 2.0864 | - |
| 0.5334 | 2150 | 2.1168 | - |
| 0.5358 | 2160 | 2.1015 | - |
| 0.5383 | 2170 | 2.1222 | - |
| 0.5408 | 2180 | 2.2427 | - |
| 0.5433 | 2190 | 2.1443 | - |
| 0.5458 | 2200 | 2.1604 | - |
| 0.5483 | 2210 | 2.0717 | - |
| 0.5507 | 2220 | 2.2068 | - |
| 0.5532 | 2230 | 2.0467 | - |
| 0.5557 | 2240 | 2.121 | - |
| 0.5582 | 2250 | 2.1791 | - |
| 0.5607 | 2260 | 2.0827 | - |
| 0.5631 | 2270 | 2.1643 | - |
| 0.5656 | 2280 | 2.2075 | - |
| 0.5681 | 2290 | 2.1106 | - |
| 0.5706 | 2300 | 2.1194 | - |
| 0.5731 | 2310 | 2.2137 | - |
| 0.5755 | 2320 | 2.0811 | - |
| 0.5780 | 2330 | 2.1033 | - |
| 0.5805 | 2340 | 1.9524 | - |
| 0.5830 | 2350 | 2.1022 | - |
| 0.5855 | 2360 | 2.127 | - |
| 0.5879 | 2370 | 2.1746 | - |
| 0.5904 | 2380 | 2.1557 | - |
| 0.5929 | 2390 | 2.1646 | - |
| 0.5954 | 2400 | 2.0664 | - |
| 0.5979 | 2410 | 2.1212 | - |
| 0.6003 | 2420 | 2.173 | - |
| 0.6028 | 2430 | 2.102 | - |
| 0.6053 | 2440 | 2.0702 | - |
| 0.6078 | 2450 | 1.9177 | - |
| 0.6103 | 2460 | 2.163 | - |
| 0.6128 | 2470 | 2.0541 | - |
| 0.6152 | 2480 | 2.1842 | - |
| 0.6177 | 2490 | 2.1937 | - |
| 0.6202 | 2500 | 2.143 | - |
| 0.6227 | 2510 | 2.1004 | - |
| 0.6252 | 2520 | 2.1145 | - |
| 0.6276 | 2530 | 2.0726 | - |
| 0.6301 | 2540 | 2.065 | - |
| 0.6326 | 2550 | 2.1342 | - |
| 0.6351 | 2560 | 2.0643 | - |
| 0.6376 | 2570 | 2.0675 | - |
| 0.6400 | 2580 | 2.0014 | - |
| 0.6425 | 2590 | 2.1966 | - |
| 0.6450 | 2600 | 2.1159 | - |
| 0.6475 | 2610 | 2.0157 | - |
| 0.6500 | 2620 | 2.0803 | - |
| 0.6524 | 2630 | 2.0227 | - |
| 0.6549 | 2640 | 2.0492 | - |
| 0.6574 | 2650 | 2.1155 | - |
| 0.6599 | 2660 | 2.0301 | - |
| 0.6624 | 2670 | 2.1791 | - |
| 0.6648 | 2680 | 2.2047 | - |
| 0.6673 | 2690 | 1.995 | - |
| 0.6698 | 2700 | 1.9908 | - |
| 0.6723 | 2710 | 2.0663 | - |
| 0.6748 | 2720 | 2.1336 | - |
| 0.6773 | 2730 | 1.9984 | - |
| 0.6797 | 2740 | 2.0234 | - |
| 0.6822 | 2750 | 2.0607 | - |
| 0.6847 | 2760 | 2.0391 | - |
| 0.6872 | 2770 | 2.2076 | - |
| 0.6897 | 2780 | 2.0322 | - |
| 0.6921 | 2790 | 2.0302 | - |
| 0.6946 | 2800 | 1.9063 | - |
| 0.6971 | 2810 | 1.9939 | - |
| 0.6996 | 2820 | 2.2912 | - |
| 0.7021 | 2830 | 2.0652 | - |
| 0.7045 | 2840 | 2.1049 | - |
| 0.7070 | 2850 | 1.9113 | - |
| 0.7095 | 2860 | 2.0191 | - |
| 0.7120 | 2870 | 2.0719 | - |
| 0.7145 | 2880 | 1.9679 | - |
| 0.7169 | 2890 | 1.9377 | - |
| 0.7194 | 2900 | 2.0376 | - |
| 0.7219 | 2910 | 2.0183 | - |
| 0.7244 | 2920 | 2.0292 | - |
| 0.7269 | 2930 | 2.0002 | - |
| 0.7293 | 2940 | 1.9756 | - |
| 0.7318 | 2950 | 1.9684 | - |
| 0.7343 | 2960 | 2.0488 | - |
| 0.7368 | 2970 | 1.9472 | - |
| 0.7393 | 2980 | 2.0093 | - |
| 0.7418 | 2990 | 2.0945 | - |
| **0.7442** | **3000** | **2.06** | **1.8518** |
| 0.7467 | 3010 | 2.1229 | - |
| 0.7492 | 3020 | 2.0158 | - |
| 0.7517 | 3030 | 2.0899 | - |
| 0.7542 | 3040 | 2.0648 | - |
| 0.7566 | 3050 | 1.9429 | - |
| 0.7591 | 3060 | 2.1461 | - |
| 0.7616 | 3070 | 1.9435 | - |
| 0.7641 | 3080 | 2.0605 | - |
| 0.7666 | 3090 | 2.0657 | - |
| 0.7690 | 3100 | 2.1311 | - |
| 0.7715 | 3110 | 2.0691 | - |
| 0.7740 | 3120 | 1.9691 | - |
| 0.7765 | 3130 | 2.0362 | - |
| 0.7790 | 3140 | 2.0247 | - |
| 0.7814 | 3150 | 2.1573 | - |
| 0.7839 | 3160 | 2.0435 | - |
| 0.7864 | 3170 | 2.0407 | - |
| 0.7889 | 3180 | 2.0048 | - |
| 0.7914 | 3190 | 1.9889 | - |
| 0.7938 | 3200 | 2.1159 | - |
| 0.7963 | 3210 | 1.8981 | - |
| 0.7988 | 3220 | 1.8512 | - |
| 0.8013 | 3230 | 1.9925 | - |
| 0.8038 | 3240 | 2.0142 | - |
| 0.8063 | 3250 | 1.9632 | - |
| 0.8087 | 3260 | 2.0138 | - |
| 0.8112 | 3270 | 2.0144 | - |
| 0.8137 | 3280 | 2.097 | - |
| 0.8162 | 3290 | 2.0671 | - |
| 0.8187 | 3300 | 2.105 | - |
| 0.8211 | 3310 | 2.1392 | - |
| 0.8236 | 3320 | 2.0254 | - |
| 0.8261 | 3330 | 2.0963 | - |
| 0.8286 | 3340 | 2.0252 | - |
| 0.8311 | 3350 | 2.2256 | - |
| 0.8335 | 3360 | 1.9461 | - |
| 0.8360 | 3370 | 2.0253 | - |
| 0.8385 | 3380 | 1.9796 | - |
| 0.8410 | 3390 | 2.0018 | - |
| 0.8435 | 3400 | 2.0701 | - |
| 0.8459 | 3410 | 2.052 | - |
| 0.8484 | 3420 | 1.9837 | - |
| 0.8509 | 3430 | 1.9627 | - |
| 0.8534 | 3440 | 1.921 | - |
| 0.8559 | 3450 | 1.9698 | - |
| 0.8583 | 3460 | 2.0254 | - |
| 0.8608 | 3470 | 1.9404 | - |
| 0.8633 | 3480 | 1.9509 | - |
| 0.8658 | 3490 | 2.0727 | - |
| 0.8683 | 3500 | 1.844 | - |
| 0.8708 | 3510 | 1.9206 | - |
| 0.8732 | 3520 | 2.0281 | - |
| 0.8757 | 3530 | 1.9659 | - |
| 0.8782 | 3540 | 2.023 | - |
| 0.8807 | 3550 | 2.0457 | - |
| 0.8832 | 3560 | 2.0822 | - |
| 0.8856 | 3570 | 2.0736 | - |
| 0.8881 | 3580 | 2.0323 | - |
| 0.8906 | 3590 | 1.9307 | - |
| 0.8931 | 3600 | 2.0086 | - |
| 0.8956 | 3610 | 2.0197 | - |
| 0.8980 | 3620 | 1.8615 | - |
| 0.9005 | 3630 | 1.8747 | - |
| 0.9030 | 3640 | 2.0277 | - |
| 0.9055 | 3650 | 2.0774 | - |
| 0.9080 | 3660 | 1.9351 | - |
| 0.9104 | 3670 | 2.0159 | - |
| 0.9129 | 3680 | 2.0375 | - |
| 0.9154 | 3690 | 1.9994 | - |
| 0.9179 | 3700 | 1.9926 | - |
| 0.9204 | 3710 | 1.8202 | - |
| 0.9228 | 3720 | 1.9775 | - |
| 0.9253 | 3730 | 2.0521 | - |
| 0.9278 | 3740 | 1.9616 | - |
| 0.9303 | 3750 | 2.0131 | - |
| 0.9328 | 3760 | 2.0278 | - |
| 0.9353 | 3770 | 1.8954 | - |
| 0.9377 | 3780 | 2.0879 | - |
| 0.9402 | 3790 | 1.995 | - |
| 0.9427 | 3800 | 1.9958 | - |
| 0.9452 | 3810 | 1.9921 | - |
| 0.9477 | 3820 | 1.964 | - |
| 0.9501 | 3830 | 2.0655 | - |
| 0.9526 | 3840 | 2.0815 | - |
| 0.9551 | 3850 | 2.034 | - |
| 0.9576 | 3860 | 1.9623 | - |
| 0.9601 | 3870 | 1.9913 | - |
| 0.9625 | 3880 | 1.8262 | - |
| 0.9650 | 3890 | 2.0192 | - |
| 0.9675 | 3900 | 1.9874 | - |
| 0.9700 | 3910 | 2.0218 | - |
| 0.9725 | 3920 | 1.9251 | - |
| 0.9749 | 3930 | 1.9167 | - |
| 0.9774 | 3940 | 1.9559 | - |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.1.0+cu118
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
egerber1/classifier-de2 | egerber1 | 2025-04-25T02:40:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-25T02:40:03Z | ---
library_name: transformers
license: mit
base_model: bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: classifier-de2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier-de2
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3294
- Accuracy: 0.8826
- Precision: 0.5399
- Recall: 0.3576
- F1: 0.4302
## Model description
More information needed
## Intended uses & limitations
More information needed
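Until the intended uses are documented, the checkpoint can at least be loaded like any Hub `text-classification` model. A minimal inference sketch follows; the German example sentence is made up, and the label names depend on the (undocumented) training data.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="egerber1/classifier-de2")

# Hypothetical German input; labels come from the model's config.
print(classifier("Das Produkt wurde schnell geliefert und funktioniert einwandfrei."))
# e.g. [{'label': 'LABEL_1', 'score': ...}]
```
Note that the reported recall (~0.36) suggests the positive class is frequently missed, so downstream use should account for that.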
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
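A minimal sketch of `transformers.TrainingArguments` matching the hyperparameters above; the output directory is a placeholder, and evaluation/saving cadence is not specified by the card.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="classifier-de2",  # placeholder, not taken from the card
    learning_rate=1.5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```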
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2757 | 0.0923 | 900 | 0.3526 | 0.8732 | 0.4426 | 0.0879 | 0.1466 |
| 0.2537 | 0.1845 | 1800 | 0.3498 | 0.8739 | 0.4782 | 0.1823 | 0.2640 |
| 0.2242 | 0.2768 | 2700 | 0.3381 | 0.8815 | 0.5739 | 0.1712 | 0.2637 |
| 0.2061 | 0.3690 | 3600 | 0.3430 | 0.8763 | 0.5022 | 0.2519 | 0.3355 |
| 0.1914 | 0.4613 | 4500 | 0.3435 | 0.8784 | 0.5202 | 0.2482 | 0.3360 |
| 0.1798 | 0.5535 | 5400 | 0.3240 | 0.8817 | 0.5554 | 0.2291 | 0.3243 |
| 0.1899 | 0.6458 | 6300 | 0.3206 | 0.8768 | 0.5052 | 0.3153 | 0.3883 |
| 0.1761 | 0.7380 | 7200 | 0.3340 | 0.8846 | 0.5955 | 0.2170 | 0.3181 |
| 0.189 | 0.8303 | 8100 | 0.3241 | 0.8860 | 0.6141 | 0.2160 | 0.3196 |
| 0.1644 | 0.9225 | 9000 | 0.3278 | 0.8861 | 0.6105 | 0.2251 | 0.3289 |
| 0.1582 | 1.0148 | 9900 | 0.3437 | 0.8847 | 0.5773 | 0.2633 | 0.3616 |
| 0.1511 | 1.1070 | 10800 | 0.3187 | 0.8836 | 0.5556 | 0.3076 | 0.3960 |
| 0.1602 | 1.1993 | 11700 | 0.3198 | 0.8860 | 0.5858 | 0.2764 | 0.3756 |
| 0.149 | 1.2915 | 12600 | 0.3244 | 0.8842 | 0.5635 | 0.2945 | 0.3868 |
| 0.1512 | 1.3838 | 13500 | 0.3281 | 0.8863 | 0.5792 | 0.3040 | 0.3987 |
| 0.1463 | 1.4760 | 14400 | 0.3228 | 0.8869 | 0.5947 | 0.2753 | 0.3763 |
| 0.1372 | 1.5683 | 15300 | 0.3300 | 0.8872 | 0.5869 | 0.3048 | 0.4012 |
| 0.1545 | 1.6605 | 16200 | 0.3229 | 0.8866 | 0.5807 | 0.3086 | 0.4030 |
| 0.1755 | 1.7528 | 17100 | 0.3070 | 0.8854 | 0.5652 | 0.3280 | 0.4151 |
| 0.1403 | 1.8450 | 18000 | 0.3212 | 0.8877 | 0.5995 | 0.2836 | 0.3851 |
| 0.1425 | 1.9373 | 18900 | 0.3179 | 0.8861 | 0.5722 | 0.3235 | 0.4133 |
| 0.1271 | 2.0295 | 19800 | 0.3483 | 0.8843 | 0.5545 | 0.3411 | 0.4224 |
| 0.1235 | 2.1218 | 20700 | 0.3362 | 0.8858 | 0.5685 | 0.3294 | 0.4171 |
| 0.1324 | 2.2140 | 21600 | 0.3294 | 0.8826 | 0.5399 | 0.3576 | 0.4302 |
| 0.1236 | 2.3063 | 22500 | 0.3345 | 0.8859 | 0.5712 | 0.3214 | 0.4113 |
| 0.1264 | 2.3985 | 23400 | 0.3575 | 0.8876 | 0.5879 | 0.3141 | 0.4094 |
| 0.1157 | 2.4908 | 24300 | 0.3405 | 0.8872 | 0.5863 | 0.3058 | 0.4020 |
| 0.1261 | 2.5830 | 25200 | 0.3372 | 0.8874 | 0.5853 | 0.3165 | 0.4109 |
| 0.1346 | 2.6753 | 26100 | 0.3398 | 0.8863 | 0.5747 | 0.3205 | 0.4115 |
| 0.1099 | 2.7675 | 27000 | 0.3492 | 0.8872 | 0.5843 | 0.3122 | 0.4070 |
| 0.1295 | 2.8598 | 27900 | 0.3374 | 0.8871 | 0.5813 | 0.3191 | 0.4120 |
| 0.1259 | 2.9520 | 28800 | 0.3410 | 0.8875 | 0.5863 | 0.3152 | 0.4100 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
genki10/BERT_V8_sp10_lw40_ex50_lo00_k5_k5_fold4 | genki10 | 2025-04-24T21:08:19Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-24T20:52:52Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex50_lo00_k5_k5_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex50_lo00_k5_k5_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1705
- Qwk: 0.2843
- Mse: 1.1705
- Rmse: 1.0819
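Qwk, Mse, and Rmse above presumably denote quadratic weighted kappa, mean squared error, and root mean squared error. A toy sketch of how such metrics can be computed with scikit-learn; the score values below are made up and are not the actual evaluation data.
```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [2, 3, 4, 1, 3, 2]  # hypothetical gold scores
y_pred = [2, 2, 4, 2, 3, 3]  # hypothetical rounded model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = mse ** 0.5
print(qwk, mse, rmse)
```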
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 4 | 9.3200 | 0.0018 | 9.3200 | 3.0529 |
| No log | 2.0 | 8 | 5.4443 | 0.0440 | 5.4443 | 2.3333 |
| No log | 3.0 | 12 | 3.0684 | 0.0040 | 3.0684 | 1.7517 |
| No log | 4.0 | 16 | 1.6222 | 0.0533 | 1.6222 | 1.2737 |
| No log | 5.0 | 20 | 1.0598 | 0.0316 | 1.0598 | 1.0294 |
| No log | 6.0 | 24 | 0.8730 | 0.2653 | 0.8730 | 0.9343 |
| No log | 7.0 | 28 | 1.1325 | 0.0555 | 1.1325 | 1.0642 |
| No log | 8.0 | 32 | 0.8457 | 0.3436 | 0.8457 | 0.9196 |
| No log | 9.0 | 36 | 1.2328 | 0.2265 | 1.2328 | 1.1103 |
| No log | 10.0 | 40 | 0.8298 | 0.3989 | 0.8298 | 0.9109 |
| No log | 11.0 | 44 | 1.5534 | 0.2440 | 1.5534 | 1.2463 |
| No log | 12.0 | 48 | 1.4950 | 0.2646 | 1.4950 | 1.2227 |
| No log | 13.0 | 52 | 0.8201 | 0.4485 | 0.8201 | 0.9056 |
| No log | 14.0 | 56 | 1.8936 | 0.1688 | 1.8936 | 1.3761 |
| No log | 15.0 | 60 | 1.1041 | 0.3442 | 1.1041 | 1.0507 |
| No log | 16.0 | 64 | 1.0786 | 0.3349 | 1.0786 | 1.0386 |
| No log | 17.0 | 68 | 1.8436 | 0.1788 | 1.8436 | 1.3578 |
| No log | 18.0 | 72 | 1.3404 | 0.2522 | 1.3404 | 1.1578 |
| No log | 19.0 | 76 | 1.0569 | 0.3663 | 1.0569 | 1.0280 |
| No log | 20.0 | 80 | 1.1673 | 0.3339 | 1.1673 | 1.0804 |
| No log | 21.0 | 84 | 1.4539 | 0.2528 | 1.4539 | 1.2058 |
| No log | 22.0 | 88 | 1.1758 | 0.3380 | 1.1758 | 1.0844 |
| No log | 23.0 | 92 | 1.4130 | 0.2739 | 1.4130 | 1.1887 |
| No log | 24.0 | 96 | 1.3242 | 0.2931 | 1.3242 | 1.1507 |
| No log | 25.0 | 100 | 1.5893 | 0.2347 | 1.5893 | 1.2607 |
| No log | 26.0 | 104 | 1.6088 | 0.2281 | 1.6088 | 1.2684 |
| No log | 27.0 | 108 | 1.2284 | 0.3175 | 1.2284 | 1.1084 |
| No log | 28.0 | 112 | 1.9626 | 0.1895 | 1.9626 | 1.4009 |
| No log | 29.0 | 116 | 1.3958 | 0.2577 | 1.3958 | 1.1814 |
| No log | 30.0 | 120 | 1.4429 | 0.2497 | 1.4429 | 1.2012 |
| No log | 31.0 | 124 | 1.0080 | 0.3668 | 1.0080 | 1.0040 |
| No log | 32.0 | 128 | 1.0719 | 0.3407 | 1.0719 | 1.0353 |
| No log | 33.0 | 132 | 1.1752 | 0.3115 | 1.1752 | 1.0841 |
| No log | 34.0 | 136 | 1.1945 | 0.3047 | 1.1945 | 1.0929 |
| No log | 35.0 | 140 | 1.2850 | 0.2768 | 1.2850 | 1.1336 |
| No log | 36.0 | 144 | 1.3042 | 0.2731 | 1.3042 | 1.1420 |
| No log | 37.0 | 148 | 1.2559 | 0.2868 | 1.2559 | 1.1207 |
| No log | 38.0 | 152 | 1.6649 | 0.2037 | 1.6649 | 1.2903 |
| No log | 39.0 | 156 | 1.3297 | 0.2339 | 1.3297 | 1.1531 |
| No log | 40.0 | 160 | 1.1931 | 0.2662 | 1.1931 | 1.0923 |
| No log | 41.0 | 164 | 1.1159 | 0.3093 | 1.1159 | 1.0564 |
| No log | 42.0 | 168 | 1.6883 | 0.2136 | 1.6883 | 1.2993 |
| No log | 43.0 | 172 | 1.1705 | 0.2843 | 1.1705 | 1.0819 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
Raniahossam33/qwen2.5-7b-instruct-ditto-Tunisia-topic-sap-custom | Raniahossam33 | 2025-04-24T18:10:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T18:01:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
orsondelight/orsondelight | orsondelight | 2025-04-24T10:48:22Z | 0 | 0 | null | [
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-24T10:48:22Z | ---
license: bsd-2-clause
---
|
MayBashendy/arabic_SDP_1_binary_multilingual_e5_small_lr3e-05_targ7 | MayBashendy | 2025-04-24T10:41:07Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-24T09:53:31Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
ASethi04/Qwen-Qwen2.5-7B-gsm8k-1-lora-1-0.004 | ASethi04 | 2025-04-24T09:28:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T09:27:57Z | ---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: Qwen-Qwen2.5-7B-gsm8k-1-lora-1-0.004
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen-Qwen2.5-7B-gsm8k-1-lora-1-0.004
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/Qwen-Qwen2.5-7B-gsm8k-1-lora-1-0.004", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/3nlse7kx)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
genki10/BERT_V8_sp10_lw40_ex10_lo00_k1_k1_fold0 | genki10 | 2025-04-24T09:05:33Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-24T08:49:38Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex10_lo00_k1_k1_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex10_lo00_k1_k1_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9059
- Qwk: 0.3887
- Mse: 0.9059
- Rmse: 0.9518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 1 | 10.0483 | 0.0036 | 10.0483 | 3.1699 |
| No log | 2.0 | 2 | 8.9108 | 0.0 | 8.9108 | 2.9851 |
| No log | 3.0 | 3 | 8.0625 | 0.0 | 8.0625 | 2.8394 |
| No log | 4.0 | 4 | 7.5347 | 0.0 | 7.5347 | 2.7449 |
| No log | 5.0 | 5 | 7.0416 | 0.0 | 7.0416 | 2.6536 |
| No log | 6.0 | 6 | 6.6422 | 0.0 | 6.6422 | 2.5773 |
| No log | 7.0 | 7 | 6.2135 | -0.0007 | 6.2135 | 2.4927 |
| No log | 8.0 | 8 | 5.7842 | 0.0173 | 5.7842 | 2.4050 |
| No log | 9.0 | 9 | 5.3573 | 0.0112 | 5.3573 | 2.3146 |
| No log | 10.0 | 10 | 4.9228 | 0.0115 | 4.9228 | 2.2187 |
| No log | 11.0 | 11 | 4.4961 | 0.0115 | 4.4961 | 2.1204 |
| No log | 12.0 | 12 | 4.0843 | 0.0039 | 4.0843 | 2.0210 |
| No log | 13.0 | 13 | 3.6744 | 0.0039 | 3.6744 | 1.9169 |
| No log | 14.0 | 14 | 3.2588 | 0.0 | 3.2588 | 1.8052 |
| No log | 15.0 | 15 | 2.8628 | 0.0 | 2.8628 | 1.6920 |
| No log | 16.0 | 16 | 2.5200 | 0.0880 | 2.5200 | 1.5875 |
| No log | 17.0 | 17 | 2.2179 | 0.0850 | 2.2179 | 1.4893 |
| No log | 18.0 | 18 | 1.9581 | 0.0511 | 1.9581 | 1.3993 |
| No log | 19.0 | 19 | 1.7607 | 0.0409 | 1.7607 | 1.3269 |
| No log | 20.0 | 20 | 1.5954 | 0.0316 | 1.5954 | 1.2631 |
| No log | 21.0 | 21 | 1.4326 | 0.0316 | 1.4326 | 1.1969 |
| No log | 22.0 | 22 | 1.2948 | 0.0316 | 1.2948 | 1.1379 |
| No log | 23.0 | 23 | 1.1614 | 0.0316 | 1.1614 | 1.0777 |
| No log | 24.0 | 24 | 1.0403 | 0.0316 | 1.0403 | 1.0199 |
| No log | 25.0 | 25 | 0.9587 | 0.0382 | 0.9587 | 0.9791 |
| No log | 26.0 | 26 | 0.9068 | 0.0484 | 0.9068 | 0.9523 |
| No log | 27.0 | 27 | 0.8533 | 0.1868 | 0.8533 | 0.9238 |
| No log | 28.0 | 28 | 0.7648 | 0.4033 | 0.7648 | 0.8745 |
| No log | 29.0 | 29 | 0.7343 | 0.4547 | 0.7343 | 0.8569 |
| No log | 30.0 | 30 | 0.7603 | 0.3904 | 0.7603 | 0.8719 |
| No log | 31.0 | 31 | 0.8029 | 0.3827 | 0.8029 | 0.8961 |
| No log | 32.0 | 32 | 0.7019 | 0.4852 | 0.7019 | 0.8378 |
| No log | 33.0 | 33 | 0.6138 | 0.4551 | 0.6138 | 0.7835 |
| No log | 34.0 | 34 | 0.5976 | 0.4523 | 0.5976 | 0.7731 |
| No log | 35.0 | 35 | 0.6071 | 0.5144 | 0.6071 | 0.7792 |
| No log | 36.0 | 36 | 0.9589 | 0.4104 | 0.9589 | 0.9792 |
| No log | 37.0 | 37 | 1.1003 | 0.3599 | 1.1003 | 1.0490 |
| No log | 38.0 | 38 | 0.8754 | 0.4068 | 0.8754 | 0.9356 |
| No log | 39.0 | 39 | 0.5961 | 0.5144 | 0.5961 | 0.7721 |
| No log | 40.0 | 40 | 0.5613 | 0.5002 | 0.5613 | 0.7492 |
| No log | 41.0 | 41 | 0.6087 | 0.4988 | 0.6087 | 0.7802 |
| No log | 42.0 | 42 | 0.8077 | 0.4326 | 0.8077 | 0.8987 |
| No log | 43.0 | 43 | 0.9335 | 0.3958 | 0.9335 | 0.9662 |
| No log | 44.0 | 44 | 0.8524 | 0.3996 | 0.8524 | 0.9233 |
| No log | 45.0 | 45 | 0.6743 | 0.4858 | 0.6743 | 0.8212 |
| No log | 46.0 | 46 | 0.6761 | 0.4746 | 0.6761 | 0.8223 |
| No log | 47.0 | 47 | 0.7430 | 0.4333 | 0.7430 | 0.8620 |
| No log | 48.0 | 48 | 0.7731 | 0.4140 | 0.7731 | 0.8793 |
| No log | 49.0 | 49 | 0.9369 | 0.3848 | 0.9369 | 0.9679 |
| No log | 50.0 | 50 | 0.9340 | 0.3776 | 0.9340 | 0.9664 |
| No log | 51.0 | 51 | 0.8041 | 0.3842 | 0.8041 | 0.8967 |
| No log | 52.0 | 52 | 0.7657 | 0.3875 | 0.7657 | 0.8750 |
| No log | 53.0 | 53 | 0.8315 | 0.3624 | 0.8315 | 0.9119 |
| No log | 54.0 | 54 | 0.9206 | 0.3413 | 0.9206 | 0.9595 |
| No log | 55.0 | 55 | 0.9621 | 0.3265 | 0.9621 | 0.9809 |
| No log | 56.0 | 56 | 0.8604 | 0.3532 | 0.8604 | 0.9276 |
| No log | 57.0 | 57 | 0.7423 | 0.4008 | 0.7423 | 0.8616 |
| No log | 58.0 | 58 | 0.7310 | 0.4449 | 0.7310 | 0.8550 |
| No log | 59.0 | 59 | 0.7518 | 0.4494 | 0.7518 | 0.8671 |
| No log | 60.0 | 60 | 0.7723 | 0.4345 | 0.7723 | 0.8788 |
| No log | 61.0 | 61 | 0.8527 | 0.3750 | 0.8527 | 0.9234 |
| No log | 62.0 | 62 | 0.9940 | 0.3258 | 0.9940 | 0.9970 |
| No log | 63.0 | 63 | 1.0460 | 0.3366 | 1.0460 | 1.0228 |
| No log | 64.0 | 64 | 0.9804 | 0.3444 | 0.9804 | 0.9901 |
| No log | 65.0 | 65 | 0.8786 | 0.3792 | 0.8786 | 0.9373 |
| No log | 66.0 | 66 | 0.8433 | 0.4086 | 0.8433 | 0.9183 |
| No log | 67.0 | 67 | 0.8306 | 0.4130 | 0.8306 | 0.9114 |
| No log | 68.0 | 68 | 0.8338 | 0.4101 | 0.8338 | 0.9131 |
| No log | 69.0 | 69 | 0.9059 | 0.3887 | 0.9059 | 0.9518 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
7-Redeem-Craze-Viral-Video/Original-Viral-Link.Redeem.Craze.Viral.Videos.Leaks.official | 7-Redeem-Craze-Viral-Video | 2025-04-24T04:38:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-24T04:36:40Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/2x869u6x?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Bacon ipsum dolor amet leberkas ribeye pork belly jowl flank short ribs. Fatback turducken tenderloin pork chop. Kevin tenderloin picanha, cow turducken turkey spare ribs leberkas porchetta. Rump leberkas meatball, alcatra capicola swine tail shank drumstick pastrami venison boudin brisket beef ribs. Andouille kevin meatball tail brisket. |
abhinavsarkar/Google-T5-base-Grammatical_Error_Correction-Finetuned-C4-200M-550k | abhinavsarkar | 2025-04-23T16:23:49Z | 319 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:abhinavsarkar/C4-200m-550k-Determiner",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-10T10:48:41Z | ---
license: apache-2.0
language:
- en
base_model:
- google-t5/t5-base
datasets:
- abhinavsarkar/C4-200m-550k-Determiner
library_name: transformers
---
# Model Card for Google-T5-base-Grammatical-Error-Correction-Finetuned-C4-200M-550k
This model is fine-tuned for grammatical error correction (GEC). It helps in generating grammatically correct text from input sentences with diverse types of errors, making it useful for applications in writing enhancement and grammar correction across various domains.
## Model Details
### Model Description
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) aimed at correcting sentences grammatically across diverse topics.
- **Developed by:** Abhinav Sarkar
- **Shared by:** abhinavsarkar
- **Model type:** Causal Language Model
- **Languages:** English
- **Finetuned from model:** Google-T5-base
## Uses
### Direct Use
This model is suitable for grammar and language correction tools, enhancing writing quality in emails, blogs, social media posts, and more.
It is particularly helpful for users seeking to improve their English language grammar and accuracy in various communication formats.
### Downstream Use
The model can be integrated into systems that require high-quality text generation and correction, such as:
- Grammar and spell-checking software
- Educational platforms for language learning
- Writing assistance tools for professionals
## How to Get Started with the Model
Use the following pieces of code to get started with the model:
- Prerequisites
```python
!pip install -U sentencepiece transformers torch
```
- Loading the model and its tokenizer
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'abhinavsarkar/Google-T5-base-Grammatical_Error_Correction-Finetuned-C4-200M-550k'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(torch_device)
```
- Running inference with the model
```python
import torch

def correct_grammar(input_text, num_return_sequences):
    # Tokenize the input sentence and move it to the same device as the model.
    batch = tokenizer([input_text], truncation=True, padding='max_length',
                      max_length=64, return_tensors="pt").to(torch_device)
    # Generate corrected candidates with beam search.
    translated = model.generate(**batch, max_length=64, num_beams=4,
                                num_return_sequences=num_return_sequences, temperature=1.5)
    tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
    return tgt_text

text = 'He are moving here.'
print(correct_grammar(text, num_return_sequences=2))
```
## Training Details
### Training Data
The model was fine-tuned on [abhinavsarkar/C4-200m-550k-Determiner](https://huggingface.co/datasets/abhinavsarkar/C4-200m-550k-Determiner), a 550k-example subset of the [C4-200M dataset](https://www.kaggle.com/datasets/felixstahlberg/the-c4-200m-dataset-for-gec) for grammatical error correction (GEC); the full C4-200M corpus contains roughly 200 million examples with diverse syntactic and semantic structures.
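If you want to inspect the training data directly, it can presumably be pulled from the Hub as shown below; split and column names are not documented here, so treat them as assumptions.
```python
from datasets import load_dataset

# Inspect the DatasetDict first -- split and column names depend on the dataset card.
gec_data = load_dataset("abhinavsarkar/C4-200m-550k-Determiner")
print(gec_data)
```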
### Training Procedure
The model was fine-tuned with the Hugging Face Transformers library on Google Colab, using Weights & Biases (wandb) for experiment tracking; a hedged sketch of a matching setup is shown after the hyperparameters below.
#### Training Hyperparameters
- **Training regime:** fp16 mixed precision
- **Epochs:** 2
- **Batch size:** 16
- **Learning rate:** 2e-4
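The original training script was not published with this card, but a setup consistent with the hyperparameters above might look like the following sketch. The output directory, column handling, and tokenization step are assumptions.
```python
# Hypothetical reconstruction of the fine-tuning setup -- not the author's original script.
from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
    T5Tokenizer,
)

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-base")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-base")

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-gec",          # assumed name
    num_train_epochs=2,
    per_device_train_batch_size=16,
    learning_rate=2e-4,
    fp16=True,                         # fp16 mixed precision, as stated above
    report_to="wandb",                 # the card mentions Wandb for tracking
)

# The tokenized train split (incorrect -> corrected sentence pairs) is not provided here:
# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=tokenized_train,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
# )
# trainer.train()
```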
## Technical Specifications
### Compute Infrastructure
#### Hardware
Fine-tuning was conducted on a single NVIDIA T4 GPU.
#### Software
- **Framework**: PyTorch
- **Libraries**: Hugging Face Transformers
## More Information
For further details or inquiries, please reach out via [LinkedIn](https://www.linkedin.com/in/abhinavsarkarrr/) or email at [email protected].
## Model Card Authors
- Abhinav Sarkar
## Model Card Contact
- [email protected]
--- |
123jones/davidjones | 123jones | 2025-04-21T03:38:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T03:38:34Z | ---
license: apache-2.0
---
|