modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 – 2025-05-23 06:28:04) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 474 classes) | tags (sequence, length 1 – 4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-05-23 06:27:34) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
vladimir707/gpt-mini1 | vladimir707 | "2025-04-19T12:09:43Z" | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | "2025-04-19T12:09:30Z" | # GPT Tiny Shakespeare (Decoder-only)
A lightweight autoregressive Transformer model (GPT-style) trained on the Tiny Shakespeare dataset.
## Architecture
- Decoder-only Transformer (similar to GPT-2)
- <1M parameters
- 2 layers, 4 attention heads
- Embedding dimension: 128
## Training data
Tiny Shakespeare (roughly 100k characters of Shakespeare stage dialogue).
## Usage
For simple text generation and experiments on CPUs.
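Since the repository does not include a loading snippet, the sketch below only illustrates the autoregressive, character-level sampling loop such a decoder-only model performs; the model interface, checkpoint format, and context length are assumptions rather than this repo's actual API.
```python
# Illustrative sketch of autoregressive character-level sampling.
# The model interface (a callable returning logits of shape (B, T, vocab_size))
# and the context length of 128 are assumptions, not this repository's API.
import torch

@torch.no_grad()
def sample(model, idx, max_new_tokens=200, block_size=128, temperature=1.0):
    # idx: (1, T) tensor of character token ids used as the prompt
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -block_size:]           # crop to the context window
        logits = model(idx_cond)                  # (1, T, vocab_size)
        logits = logits[:, -1, :] / temperature   # keep only the last position
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        idx = torch.cat([idx, next_id], dim=1)    # append and continue
    return idx
```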
## Tags
- gpt
- decoder-only
- tiny
- shakespeare
- text-generation
- educational
|
SanderGi/PCB-OBB | SanderGi | "2025-04-19T12:08:57Z" | 0 | 0 | ultralytics | [
"ultralytics",
"printed-circuit-boards",
"base_model:Ultralytics/YOLO11",
"base_model:finetune:Ultralytics/YOLO11",
"license:mit",
"model-index",
"region:us"
] | null | "2025-04-19T11:43:34Z" | ---
license: mit
base_model:
- Ultralytics/YOLO11
tags:
- printed-circuit-boards
library_name: ultralytics
model-index:
- name: ultralytics/yolo11
results:
- task:
type: object-detection
metrics:
- type: f1
value: 93.8%
name: F1 Score
- type: mAP50
value: 93.0%
name: mAP50
metrics:
- f1 - 93.8%
- mAP50 - 93.0%
---
# PCB Detection
There are [a lot of models](https://universe.roboflow.com/roboflow-100/printed-circuit-board/model/3) for detecting components within a Printed Circuit Board (PCB), but not as many for detecting which pixels (if any) in an image contain the PCB itself. Determining if and where a PCB appears in an image is useful for [calculating its size to estimate carbon footprint](https://github.com/SanderGi/LCA), as a preprocessing step for component detection, for limiting how much of the image more expensive PCB defect-detection models have to process, and more.
Read more [here](https://github.com/SanderGi/PCB-Detection).
## Usage
1. Download [`the model weights`](https://huggingface.co/SanderGi/PCB-OBB/resolve/main/best.pt?download=true)
2. `pip install ultralytics`
3. Run the model with `yolo task=obb mode=predict model=[path to model weights] source=[path to test image]` from the terminal or with Python:
```python
from ultralytics import YOLO
model = YOLO('[path to model weights]')
results = model.predict('[path/to/test/image.jpg]')
```
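For downstream use, the oriented boxes can be read off the prediction results; a short sketch (the `obb` attribute names follow recent Ultralytics releases and may differ across versions):
```python
from ultralytics import YOLO

# Run a prediction and read out the oriented bounding boxes (OBB).
# Attribute names follow recent Ultralytics releases and may differ by version.
model = YOLO('[path to model weights]')
results = model.predict('[path/to/test/image.jpg]')
for r in results:
    if r.obb is not None:
        print(r.obb.xyxyxyxy)  # (N, 4, 2) corner points of each rotated box
        print(r.obb.conf)      # confidence score per detection
        print(r.obb.cls)       # class index per detection
```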
## Results
Dataset | Precision | Recall | F1 Score | mAP50 | mAP50-95
-----------|-----------|--------|----------|--------|---------
Training | 100.0% | 100.0% | 100.0% | 100.0% | 100.0%
Validation | 100.0% | 100.0% | 100.0% | 99.5% | 97.0%
Test | 100.0% | 88.4% | 93.8% | 93.0% | 91.2%
Sample predictions:
 |
phospho-app/lerobot_v2_ball040-mnsx4wnl5s | phospho-app | "2025-04-19T12:08:20Z" | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"replicate",
"region:us"
] | null | "2025-04-19T11:31:55Z" |
---
tags:
- phosphobot
- gr00t
- replicate
task_categories:
- robotics
---
# Gr00t Model - phospho Replication Pipeline
This model was trained using **phospho's Replicate pipeline** for **gr00t models**.
Training parameters:
- **Dataset**: [pgoffin/lerobot_v2_ball040](https://huggingface.co/datasets/pgoffin/lerobot_v2_ball040)
- **Wandb run URL**: https://wandb.ai/artcomputer123-artcomputer/gr00t-replicate/runs/tcprmxc0
- **Epochs**: 20
- **Batch size**: 64
- **Training steps**: 1810
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
🔗 **Explore on Replicate**: [Replicate](https://replicate.com/phospho-app/gr00t-policy)
|
Mehrdadslehi/Qwen2-0.5B-GRPO-SFT1_RL | Mehrdadslehi | "2025-04-19T12:05:13Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | "2025-04-18T14:43:03Z" | ---
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-SFT1_RL
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-SFT1_RL
This model is a fine-tuned version of an unspecified base model, trained on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Mehrdadslehi/Qwen2-0.5B-GRPO-SFT1_RL", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
saisasanky/Llama-3.1-8B-Instruct-4bit-aish_gguf | saisasanky | "2025-04-19T12:05:09Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T12:02:51Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** saisasanky
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TOMFORD79/Cake_13 | TOMFORD79 | "2025-04-19T12:02:30Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-04-19T10:57:12Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
hyokwan/llama31_famili_2025 | hyokwan | "2025-04-19T11:56:41Z" | 0 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T11:43:12Z" | ---
license: apache-2.0
---
|
rbelanec/train_mrpc_1744902647 | rbelanec | "2025-04-19T11:55:12Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | "2025-04-19T03:05:12Z" | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_mrpc_1744902647
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_mrpc_1744902647
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the mrpc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1153
- Num Input Tokens Seen: 65784064
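No inference snippet is provided; below is a minimal sketch for loading the prompt-tuning adapter with 🤗 PEFT on top of the gated base model (the MRPC prompt wording is an assumption, since the training template is not documented here).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the prompt-tuning adapter to the Llama 3 base model.
base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "rbelanec/train_mrpc_1744902647")

# Illustrative MRPC-style query; the exact template used in training is unknown.
prompt = "Do these two sentences mean the same thing? Sentence 1: ... Sentence 2: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```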
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.3
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:--------:|:-----:|:---------------:|:-----------------:|
| 0.178 | 0.9685 | 200 | 0.2087 | 329312 |
| 0.1823 | 1.9395 | 400 | 0.1658 | 658560 |
| 0.1749 | 2.9104 | 600 | 0.1568 | 987040 |
| 0.1421 | 3.8814 | 800 | 0.1488 | 1316448 |
| 0.1589 | 4.8523 | 1000 | 0.1386 | 1644608 |
| 0.1413 | 5.8232 | 1200 | 0.1437 | 1974016 |
| 0.1574 | 6.7942 | 1400 | 0.1415 | 2303584 |
| 0.1059 | 7.7651 | 1600 | 0.1394 | 2630688 |
| 0.1251 | 8.7361 | 1800 | 0.1364 | 2959808 |
| 0.1088 | 9.7070 | 2000 | 0.1153 | 3287584 |
| 0.0996 | 10.6780 | 2200 | 0.1288 | 3617920 |
| 0.0731 | 11.6489 | 2400 | 0.1425 | 3945536 |
| 0.0462 | 12.6199 | 2600 | 0.1264 | 4274560 |
| 0.0608 | 13.5908 | 2800 | 0.1430 | 4603168 |
| 0.022 | 14.5617 | 3000 | 0.1643 | 4932448 |
| 0.0288 | 15.5327 | 3200 | 0.1873 | 5261312 |
| 0.0596 | 16.5036 | 3400 | 0.1731 | 5589632 |
| 0.0199 | 17.4746 | 3600 | 0.2195 | 5918112 |
| 0.0126 | 18.4455 | 3800 | 0.1945 | 6246368 |
| 0.0278 | 19.4165 | 4000 | 0.2397 | 6574848 |
| 0.0266 | 20.3874 | 4200 | 0.2409 | 6903520 |
| 0.005 | 21.3584 | 4400 | 0.2498 | 7231904 |
| 0.0084 | 22.3293 | 4600 | 0.2794 | 7561504 |
| 0.0394 | 23.3002 | 4800 | 0.3202 | 7890912 |
| 0.0073 | 24.2712 | 5000 | 0.2466 | 8218592 |
| 0.0014 | 25.2421 | 5200 | 0.2287 | 8548256 |
| 0.0264 | 26.2131 | 5400 | 0.2157 | 8876704 |
| 0.0123 | 27.1840 | 5600 | 0.2764 | 9206272 |
| 0.0033 | 28.1550 | 5800 | 0.2357 | 9534720 |
| 0.0057 | 29.1259 | 6000 | 0.2608 | 9864384 |
| 0.0052 | 30.0969 | 6200 | 0.2388 | 10193376 |
| 0.0011 | 31.0678 | 6400 | 0.3327 | 10521952 |
| 0.0081 | 32.0387 | 6600 | 0.3200 | 10851520 |
| 0.0102 | 33.0097 | 6800 | 0.2724 | 11180544 |
| 0.0056 | 33.9782 | 7000 | 0.3153 | 11509344 |
| 0.0106 | 34.9492 | 7200 | 0.3401 | 11838208 |
| 0.0221 | 35.9201 | 7400 | 0.2370 | 12167872 |
| 0.0417 | 36.8910 | 7600 | 0.2576 | 12496352 |
| 0.017 | 37.8620 | 7800 | 0.2671 | 12826048 |
| 0.0308 | 38.8329 | 8000 | 0.2404 | 13155040 |
| 0.0004 | 39.8039 | 8200 | 0.3460 | 13483008 |
| 0.0026 | 40.7748 | 8400 | 0.3096 | 13812064 |
| 0.0096 | 41.7458 | 8600 | 0.2907 | 14140576 |
| 0.0053 | 42.7167 | 8800 | 0.3575 | 14469248 |
| 0.0004 | 43.6877 | 9000 | 0.3422 | 14796672 |
| 0.0004 | 44.6586 | 9200 | 0.3874 | 15126752 |
| 0.0001 | 45.6295 | 9400 | 0.4214 | 15456160 |
| 0.0 | 46.6005 | 9600 | 0.4649 | 15784928 |
| 0.0 | 47.5714 | 9800 | 0.4702 | 16113248 |
| 0.0 | 48.5424 | 10000 | 0.4790 | 16442496 |
| 0.0 | 49.5133 | 10200 | 0.4879 | 16772640 |
| 0.0 | 50.4843 | 10400 | 0.4982 | 17100000 |
| 0.0 | 51.4552 | 10600 | 0.5047 | 17428768 |
| 0.0 | 52.4262 | 10800 | 0.5145 | 17757344 |
| 0.0 | 53.3971 | 11000 | 0.5212 | 18085920 |
| 0.0 | 54.3680 | 11200 | 0.5300 | 18414336 |
| 0.0 | 55.3390 | 11400 | 0.5361 | 18743040 |
| 0.0 | 56.3099 | 11600 | 0.5398 | 19072928 |
| 0.0 | 57.2809 | 11800 | 0.5495 | 19401376 |
| 0.0 | 58.2518 | 12000 | 0.5575 | 19730336 |
| 0.0 | 59.2228 | 12200 | 0.5617 | 20059488 |
| 0.0 | 60.1937 | 12400 | 0.5684 | 20388064 |
| 0.0 | 61.1646 | 12600 | 0.5766 | 20718144 |
| 0.0 | 62.1356 | 12800 | 0.5837 | 21048224 |
| 0.0 | 63.1065 | 13000 | 0.5898 | 21376576 |
| 0.0 | 64.0775 | 13200 | 0.5944 | 21706080 |
| 0.0 | 65.0484 | 13400 | 0.6017 | 22034624 |
| 0.0 | 66.0194 | 13600 | 0.6081 | 22364128 |
| 0.0 | 66.9879 | 13800 | 0.6153 | 22692352 |
| 0.0 | 67.9588 | 14000 | 0.6185 | 23020864 |
| 0.0 | 68.9298 | 14200 | 0.6256 | 23349920 |
| 0.0 | 69.9007 | 14400 | 0.6316 | 23679072 |
| 0.0 | 70.8717 | 14600 | 0.6375 | 24007776 |
| 0.0 | 71.8426 | 14800 | 0.6423 | 24336640 |
| 0.0 | 72.8136 | 15000 | 0.6482 | 24664576 |
| 0.0 | 73.7845 | 15200 | 0.6532 | 24994848 |
| 0.0 | 74.7554 | 15400 | 0.6600 | 25322720 |
| 0.0 | 75.7264 | 15600 | 0.6636 | 25650784 |
| 0.0 | 76.6973 | 15800 | 0.6717 | 25980512 |
| 0.0 | 77.6683 | 16000 | 0.6783 | 26309536 |
| 0.0 | 78.6392 | 16200 | 0.6823 | 26638944 |
| 0.0 | 79.6102 | 16400 | 0.6852 | 26967360 |
| 0.0 | 80.5811 | 16600 | 0.6886 | 27297120 |
| 0.0 | 81.5521 | 16800 | 0.6960 | 27626144 |
| 0.0 | 82.5230 | 17000 | 0.6995 | 27954656 |
| 0.0 | 83.4939 | 17200 | 0.7026 | 28284160 |
| 0.0 | 84.4649 | 17400 | 0.7082 | 28612224 |
| 0.0 | 85.4358 | 17600 | 0.7175 | 28940448 |
| 0.0 | 86.4068 | 17800 | 0.7151 | 29270912 |
| 0.0 | 87.3777 | 18000 | 0.7190 | 29599424 |
| 0.0 | 88.3487 | 18200 | 0.7216 | 29929280 |
| 0.0 | 89.3196 | 18400 | 0.7255 | 30257504 |
| 0.0 | 90.2906 | 18600 | 0.7287 | 30586944 |
| 0.0 | 91.2615 | 18800 | 0.7326 | 30915744 |
| 0.0 | 92.2324 | 19000 | 0.7370 | 31245216 |
| 0.0 | 93.2034 | 19200 | 0.7390 | 31573600 |
| 0.0 | 94.1743 | 19400 | 0.7438 | 31903616 |
| 0.0 | 95.1453 | 19600 | 0.7448 | 32232032 |
| 0.0 | 96.1162 | 19800 | 0.7473 | 32560480 |
| 0.0 | 97.0872 | 20000 | 0.7519 | 32889696 |
| 0.0 | 98.0581 | 20200 | 0.7500 | 33218016 |
| 0.0 | 99.0291 | 20400 | 0.7546 | 33547296 |
| 0.0 | 99.9976 | 20600 | 0.7566 | 33876000 |
| 0.0 | 100.9685 | 20800 | 0.7594 | 34205376 |
| 0.0 | 101.9395 | 21000 | 0.7605 | 34534496 |
| 0.0 | 102.9104 | 21200 | 0.7605 | 34864000 |
| 0.0 | 103.8814 | 21400 | 0.7587 | 35192256 |
| 0.0 | 104.8523 | 21600 | 0.7634 | 35521376 |
| 0.0 | 105.8232 | 21800 | 0.7686 | 35851264 |
| 0.0 | 106.7942 | 22000 | 0.7694 | 36180000 |
| 0.0 | 107.7651 | 22200 | 0.7666 | 36508832 |
| 0.0 | 108.7361 | 22400 | 0.7679 | 36837600 |
| 0.0 | 109.7070 | 22600 | 0.7702 | 37166720 |
| 0.0 | 110.6780 | 22800 | 0.7690 | 37495520 |
| 0.0 | 111.6489 | 23000 | 0.7686 | 37824352 |
| 0.0 | 112.6199 | 23200 | 0.7735 | 38153856 |
| 0.0 | 113.5908 | 23400 | 0.7741 | 38483200 |
| 0.0 | 114.5617 | 23600 | 0.7726 | 38812672 |
| 0.0 | 115.5327 | 23800 | 0.7704 | 39142400 |
| 0.0 | 116.5036 | 24000 | 0.7778 | 39471200 |
| 0.0 | 117.4746 | 24200 | 0.7778 | 39798848 |
| 0.0 | 118.4455 | 24400 | 0.7782 | 40127360 |
| 0.0 | 119.4165 | 24600 | 0.7768 | 40456736 |
| 0.0 | 120.3874 | 24800 | 0.7763 | 40785312 |
| 0.0 | 121.3584 | 25000 | 0.7760 | 41112576 |
| 0.0 | 122.3293 | 25200 | 0.7755 | 41442112 |
| 0.0 | 123.3002 | 25400 | 0.7797 | 41771552 |
| 0.0 | 124.2712 | 25600 | 0.7775 | 42101248 |
| 0.0 | 125.2421 | 25800 | 0.7784 | 42427392 |
| 0.0 | 126.2131 | 26000 | 0.7776 | 42756704 |
| 0.0 | 127.1840 | 26200 | 0.7778 | 43085664 |
| 0.0 | 128.1550 | 26400 | 0.7804 | 43414240 |
| 0.0 | 129.1259 | 26600 | 0.7818 | 43743072 |
| 0.0 | 130.0969 | 26800 | 0.7803 | 44072768 |
| 0.0 | 131.0678 | 27000 | 0.7818 | 44400192 |
| 0.0 | 132.0387 | 27200 | 0.7777 | 44729632 |
| 0.0 | 133.0097 | 27400 | 0.7763 | 45058976 |
| 0.0 | 133.9782 | 27600 | 0.7758 | 45388352 |
| 0.0 | 134.9492 | 27800 | 0.7759 | 45717952 |
| 0.0 | 135.9201 | 28000 | 0.7753 | 46046144 |
| 0.0 | 136.8910 | 28200 | 0.7770 | 46375168 |
| 0.0 | 137.8620 | 28400 | 0.7772 | 46702816 |
| 0.0 | 138.8329 | 28600 | 0.7731 | 47033152 |
| 0.0 | 139.8039 | 28800 | 0.7766 | 47361472 |
| 0.0 | 140.7748 | 29000 | 0.7751 | 47691424 |
| 0.0 | 141.7458 | 29200 | 0.7738 | 48019712 |
| 0.0 | 142.7167 | 29400 | 0.7735 | 48348832 |
| 0.0 | 143.6877 | 29600 | 0.7778 | 48678560 |
| 0.0 | 144.6586 | 29800 | 0.7774 | 49008256 |
| 0.0 | 145.6295 | 30000 | 0.7776 | 49337088 |
| 0.0 | 146.6005 | 30200 | 0.7748 | 49665344 |
| 0.0 | 147.5714 | 30400 | 0.7795 | 49996128 |
| 0.0 | 148.5424 | 30600 | 0.7759 | 50324736 |
| 0.0 | 149.5133 | 30800 | 0.7778 | 50652864 |
| 0.0 | 150.4843 | 31000 | 0.7747 | 50981920 |
| 0.0 | 151.4552 | 31200 | 0.7766 | 51310752 |
| 0.0 | 152.4262 | 31400 | 0.7740 | 51640352 |
| 0.0 | 153.3971 | 31600 | 0.7767 | 51969184 |
| 0.0 | 154.3680 | 31800 | 0.7749 | 52297280 |
| 0.0 | 155.3390 | 32000 | 0.7770 | 52625600 |
| 0.0 | 156.3099 | 32200 | 0.7763 | 52953920 |
| 0.0 | 157.2809 | 32400 | 0.7756 | 53283648 |
| 0.0 | 158.2518 | 32600 | 0.7734 | 53613056 |
| 0.0 | 159.2228 | 32800 | 0.7745 | 53941632 |
| 0.0 | 160.1937 | 33000 | 0.7744 | 54270272 |
| 0.0 | 161.1646 | 33200 | 0.7751 | 54599104 |
| 0.0 | 162.1356 | 33400 | 0.7762 | 54929056 |
| 0.0 | 163.1065 | 33600 | 0.7753 | 55257728 |
| 0.0 | 164.0775 | 33800 | 0.7750 | 55587456 |
| 0.0 | 165.0484 | 34000 | 0.7761 | 55916576 |
| 0.0 | 166.0194 | 34200 | 0.7766 | 56245664 |
| 0.0 | 166.9879 | 34400 | 0.7779 | 56574272 |
| 0.0 | 167.9588 | 34600 | 0.7757 | 56903360 |
| 0.0 | 168.9298 | 34800 | 0.7782 | 57232032 |
| 0.0 | 169.9007 | 35000 | 0.7748 | 57561504 |
| 0.0 | 170.8717 | 35200 | 0.7741 | 57891168 |
| 0.0 | 171.8426 | 35400 | 0.7750 | 58220352 |
| 0.0 | 172.8136 | 35600 | 0.7764 | 58548960 |
| 0.0 | 173.7845 | 35800 | 0.7737 | 58878688 |
| 0.0 | 174.7554 | 36000 | 0.7754 | 59207104 |
| 0.0 | 175.7264 | 36200 | 0.7763 | 59536800 |
| 0.0 | 176.6973 | 36400 | 0.7769 | 59865312 |
| 0.0 | 177.6683 | 36600 | 0.7765 | 60194816 |
| 0.0 | 178.6392 | 36800 | 0.7797 | 60523584 |
| 0.0 | 179.6102 | 37000 | 0.7767 | 60852352 |
| 0.0 | 180.5811 | 37200 | 0.7763 | 61181024 |
| 0.0 | 181.5521 | 37400 | 0.7752 | 61510624 |
| 0.0 | 182.5230 | 37600 | 0.7787 | 61840672 |
| 0.0 | 183.4939 | 37800 | 0.7763 | 62167808 |
| 0.0 | 184.4649 | 38000 | 0.7755 | 62496960 |
| 0.0 | 185.4358 | 38200 | 0.7765 | 62826016 |
| 0.0 | 186.4068 | 38400 | 0.7740 | 63154784 |
| 0.0 | 187.3777 | 38600 | 0.7765 | 63483904 |
| 0.0 | 188.3487 | 38800 | 0.7759 | 63811808 |
| 0.0 | 189.3196 | 39000 | 0.7742 | 64139488 |
| 0.0 | 190.2906 | 39200 | 0.7752 | 64467808 |
| 0.0 | 191.2615 | 39400 | 0.7753 | 64798112 |
| 0.0 | 192.2324 | 39600 | 0.7759 | 65126304 |
| 0.0 | 193.2034 | 39800 | 0.7780 | 65455776 |
| 0.0 | 194.1743 | 40000 | 0.7753 | 65784064 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
shadenmoh/xlm-roberta-base | shadenmoh | "2025-04-19T11:54:04Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:tner/xlm-roberta-base-panx-dataset-ar",
"base_model:finetune:tner/xlm-roberta-base-panx-dataset-ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-04-19T11:51:06Z" | ---
library_name: transformers
base_model: tner/xlm-roberta-base-panx-dataset-ar
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base
This model is a fine-tuned version of [tner/xlm-roberta-base-panx-dataset-ar](https://huggingface.co/tner/xlm-roberta-base-panx-dataset-ar) on an unspecified dataset.
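Until the card is completed, here is a minimal sketch for running the checkpoint with the token-classification pipeline (the example sentence is illustrative, and the label set is inherited from the PANX-ar base model unless the fine-tune changed it).
```python
from transformers import pipeline

# NER-style inference with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="shadenmoh/xlm-roberta-base",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("ولد نجيب محفوظ في القاهرة عام 1911."))
```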
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Vergill2345/Razor | Vergill2345 | "2025-04-19T11:53:44Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T11:53:44Z" | ---
license: apache-2.0
---
|
rbelanec/train_sst2_1744902627 | rbelanec | "2025-04-19T11:52:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-04-18T23:09:42Z" | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_sst2_1744902627
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_sst2_1744902627
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0593
- Num Input Tokens Seen: 33458560
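No usage example ships with the card; here is a minimal sketch that loads (and optionally merges) the LoRA adapter onto the Mistral base model, with an illustrative SST-2-style prompt, since the training-time template is not documented.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the LoRA adapter to the Mistral base model and optionally merge it in.
base_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "rbelanec/train_sst2_1744902627")
model = model.merge_and_unload()  # fold LoRA weights into the base for faster inference

# Illustrative SST-2-style query; the exact template used in training is unknown.
prompt = "Is the sentiment of this sentence positive or negative? 'A gorgeous, witty, seductive movie.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=4)[0], skip_special_tokens=True))
```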
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-------:|:-----:|:---------------:|:-----------------:|
| 0.0745 | 0.0528 | 200 | 0.0783 | 166688 |
| 0.1164 | 0.1056 | 400 | 0.1055 | 334048 |
| 0.0125 | 0.1584 | 600 | 0.0809 | 500448 |
| 0.0293 | 0.2112 | 800 | 0.0696 | 667872 |
| 0.0633 | 0.2640 | 1000 | 0.0697 | 834848 |
| 0.037 | 0.3167 | 1200 | 0.0937 | 1002816 |
| 0.0586 | 0.3695 | 1400 | 0.0752 | 1169088 |
| 0.1164 | 0.4223 | 1600 | 0.0692 | 1337088 |
| 0.0527 | 0.4751 | 1800 | 0.0690 | 1505536 |
| 0.0513 | 0.5279 | 2000 | 0.0648 | 1673024 |
| 0.0544 | 0.5807 | 2200 | 0.0687 | 1842304 |
| 0.0687 | 0.6335 | 2400 | 0.0648 | 2007328 |
| 0.0739 | 0.6863 | 2600 | 0.0617 | 2174880 |
| 0.1184 | 0.7391 | 2800 | 0.0758 | 2341280 |
| 0.113 | 0.7919 | 3000 | 0.0676 | 2509440 |
| 0.0589 | 0.8447 | 3200 | 0.0651 | 2674784 |
| 0.0484 | 0.8975 | 3400 | 0.0629 | 2843680 |
| 0.1224 | 0.9502 | 3600 | 0.0674 | 3011904 |
| 0.0241 | 1.0029 | 3800 | 0.0719 | 3178064 |
| 0.0495 | 1.0557 | 4000 | 0.0655 | 3345904 |
| 0.0462 | 1.1085 | 4200 | 0.0620 | 3514608 |
| 0.0457 | 1.1613 | 4400 | 0.0608 | 3680560 |
| 0.0455 | 1.2141 | 4600 | 0.0638 | 3849328 |
| 0.0395 | 1.2669 | 4800 | 0.0638 | 4017200 |
| 0.037 | 1.3197 | 5000 | 0.0653 | 4187184 |
| 0.0199 | 1.3724 | 5200 | 0.0640 | 4354416 |
| 0.0455 | 1.4252 | 5400 | 0.0654 | 4519856 |
| 0.0231 | 1.4780 | 5600 | 0.0632 | 4687280 |
| 0.0568 | 1.5308 | 5800 | 0.0660 | 4856112 |
| 0.0563 | 1.5836 | 6000 | 0.0604 | 5022736 |
| 0.1588 | 1.6364 | 6200 | 0.0595 | 5188656 |
| 0.0301 | 1.6892 | 6400 | 0.0716 | 5356208 |
| 0.0572 | 1.7420 | 6600 | 0.0593 | 5523952 |
| 0.0309 | 1.7948 | 6800 | 0.0615 | 5690672 |
| 0.0549 | 1.8476 | 7000 | 0.0636 | 5857072 |
| 0.0461 | 1.9004 | 7200 | 0.0662 | 6024976 |
| 0.0753 | 1.9531 | 7400 | 0.0680 | 6191664 |
| 0.0403 | 2.0058 | 7600 | 0.0632 | 6357472 |
| 0.0261 | 2.0586 | 7800 | 0.0631 | 6525984 |
| 0.0126 | 2.1114 | 8000 | 0.0692 | 6692320 |
| 0.0039 | 2.1642 | 8200 | 0.0756 | 6860064 |
| 0.0463 | 2.2170 | 8400 | 0.0737 | 7026528 |
| 0.0118 | 2.2698 | 8600 | 0.0701 | 7192384 |
| 0.0211 | 2.3226 | 8800 | 0.0734 | 7358816 |
| 0.0369 | 2.3753 | 9000 | 0.0806 | 7526496 |
| 0.0336 | 2.4281 | 9200 | 0.1007 | 7696064 |
| 0.0252 | 2.4809 | 9400 | 0.0934 | 7863456 |
| 0.0041 | 2.5337 | 9600 | 0.0711 | 8031776 |
| 0.0297 | 2.5865 | 9800 | 0.0753 | 8199584 |
| 0.0712 | 2.6393 | 10000 | 0.0741 | 8366016 |
| 0.008 | 2.6921 | 10200 | 0.0924 | 8531808 |
| 0.0324 | 2.7449 | 10400 | 0.0728 | 8702976 |
| 0.0155 | 2.7977 | 10600 | 0.0793 | 8870944 |
| 0.057 | 2.8505 | 10800 | 0.0790 | 9039680 |
| 0.0472 | 2.9033 | 11000 | 0.0685 | 9206880 |
| 0.0067 | 2.9561 | 11200 | 0.0759 | 9372128 |
| 0.023 | 3.0087 | 11400 | 0.1036 | 9538768 |
| 0.0009 | 3.0615 | 11600 | 0.1041 | 9705232 |
| 0.0026 | 3.1143 | 11800 | 0.1420 | 9871632 |
| 0.007 | 3.1671 | 12000 | 0.1002 | 10039472 |
| 0.0023 | 3.2199 | 12200 | 0.1041 | 10206320 |
| 0.003 | 3.2727 | 12400 | 0.1281 | 10376240 |
| 0.0087 | 3.3255 | 12600 | 0.1209 | 10544464 |
| 0.0146 | 3.3782 | 12800 | 0.1133 | 10712240 |
| 0.0024 | 3.4310 | 13000 | 0.1138 | 10879120 |
| 0.0002 | 3.4838 | 13200 | 0.1368 | 11045072 |
| 0.0046 | 3.5366 | 13400 | 0.1290 | 11211312 |
| 0.0263 | 3.5894 | 13600 | 0.1125 | 11378128 |
| 0.0051 | 3.6422 | 13800 | 0.1019 | 11544592 |
| 0.019 | 3.6950 | 14000 | 0.0911 | 11713040 |
| 0.0066 | 3.7478 | 14200 | 0.1016 | 11880432 |
| 0.0148 | 3.8006 | 14400 | 0.1121 | 12048176 |
| 0.0001 | 3.8534 | 14600 | 0.1492 | 12215792 |
| 0.0281 | 3.9062 | 14800 | 0.1034 | 12383792 |
| 0.0056 | 3.9590 | 15000 | 0.0986 | 12549680 |
| 0.0001 | 4.0116 | 15200 | 0.1283 | 12716448 |
| 0.0037 | 4.0644 | 15400 | 0.2098 | 12882752 |
| 0.0078 | 4.1172 | 15600 | 0.1637 | 13051200 |
| 0.0078 | 4.1700 | 15800 | 0.1420 | 13217024 |
| 0.0162 | 4.2228 | 16000 | 0.1648 | 13382784 |
| 0.0003 | 4.2756 | 16200 | 0.1115 | 13549216 |
| 0.0021 | 4.3284 | 16400 | 0.1300 | 13719072 |
| 0.0028 | 4.3812 | 16600 | 0.1370 | 13884928 |
| 0.0028 | 4.4339 | 16800 | 0.1263 | 14051584 |
| 0.0001 | 4.4867 | 17000 | 0.1414 | 14220704 |
| 0.0038 | 4.5395 | 17200 | 0.1233 | 14387008 |
| 0.0052 | 4.5923 | 17400 | 0.2157 | 14555808 |
| 0.0 | 4.6451 | 17600 | 0.2525 | 14723456 |
| 0.0196 | 4.6979 | 17800 | 0.1433 | 14890880 |
| 0.0229 | 4.7507 | 18000 | 0.1838 | 15059744 |
| 0.057 | 4.8035 | 18200 | 0.1577 | 15224512 |
| 0.0453 | 4.8563 | 18400 | 0.1232 | 15392960 |
| 0.0066 | 4.9091 | 18600 | 0.1784 | 15561696 |
| 0.0162 | 4.9619 | 18800 | 0.1757 | 15728800 |
| 0.0485 | 5.0145 | 19000 | 0.1900 | 15897552 |
| 0.0009 | 5.0673 | 19200 | 0.1809 | 16064688 |
| 0.0005 | 5.1201 | 19400 | 0.1684 | 16231120 |
| 0.0044 | 5.1729 | 19600 | 0.1712 | 16397744 |
| 0.1072 | 5.2257 | 19800 | 0.1722 | 16564176 |
| 0.037 | 5.2785 | 20000 | 0.1688 | 16731600 |
| 0.0052 | 5.3313 | 20200 | 0.1799 | 16898064 |
| 0.0 | 5.3841 | 20400 | 0.1782 | 17064080 |
| 0.0002 | 5.4368 | 20600 | 0.2476 | 17231888 |
| 0.0057 | 5.4896 | 20800 | 0.1952 | 17399184 |
| 0.0022 | 5.5424 | 21000 | 0.2121 | 17566160 |
| 0.0063 | 5.5952 | 21200 | 0.1922 | 17732304 |
| 0.004 | 5.6480 | 21400 | 0.1764 | 17900880 |
| 0.0001 | 5.7008 | 21600 | 0.1741 | 18070192 |
| 0.0001 | 5.7536 | 21800 | 0.1708 | 18237168 |
| 0.0058 | 5.8064 | 22000 | 0.2006 | 18403856 |
| 0.002 | 5.8592 | 22200 | 0.2176 | 18571248 |
| 0.0 | 5.9120 | 22400 | 0.2260 | 18738672 |
| 0.0002 | 5.9648 | 22600 | 0.1856 | 18905744 |
| 0.0 | 6.0174 | 22800 | 0.2380 | 19073440 |
| 0.0 | 6.0702 | 23000 | 0.1898 | 19241920 |
| 0.0036 | 6.1230 | 23200 | 0.1885 | 19409408 |
| 0.0 | 6.1758 | 23400 | 0.1937 | 19577024 |
| 0.0532 | 6.2286 | 23600 | 0.1851 | 19744608 |
| 0.0036 | 6.2814 | 23800 | 0.1758 | 19911488 |
| 0.0001 | 6.3342 | 24000 | 0.2023 | 20078944 |
| 0.0 | 6.3870 | 24200 | 0.2283 | 20244928 |
| 0.0107 | 6.4398 | 24400 | 0.1919 | 20411232 |
| 0.0 | 6.4925 | 24600 | 0.2069 | 20578080 |
| 0.0002 | 6.5453 | 24800 | 0.1839 | 20746592 |
| 0.0 | 6.5981 | 25000 | 0.1915 | 20913344 |
| 0.0 | 6.6509 | 25200 | 0.2142 | 21081952 |
| 0.0051 | 6.7037 | 25400 | 0.1797 | 21248384 |
| 0.0027 | 6.7565 | 25600 | 0.1834 | 21415872 |
| 0.0004 | 6.8093 | 25800 | 0.1750 | 21584000 |
| 0.0004 | 6.8621 | 26000 | 0.2093 | 21751168 |
| 0.0 | 6.9149 | 26200 | 0.1849 | 21918816 |
| 0.0291 | 6.9677 | 26400 | 0.1955 | 22084384 |
| 0.0001 | 7.0203 | 26600 | 0.2036 | 22251776 |
| 0.0 | 7.0731 | 26800 | 0.2368 | 22418080 |
| 0.0 | 7.1259 | 27000 | 0.2574 | 22587392 |
| 0.0 | 7.1787 | 27200 | 0.2346 | 22753056 |
| 0.0 | 7.2315 | 27400 | 0.2273 | 22920768 |
| 0.0 | 7.2843 | 27600 | 0.2511 | 23087296 |
| 0.0 | 7.3371 | 27800 | 0.2544 | 23254400 |
| 0.0 | 7.3899 | 28000 | 0.2559 | 23422752 |
| 0.0 | 7.4427 | 28200 | 0.2674 | 23588352 |
| 0.0055 | 7.4954 | 28400 | 0.2443 | 23755840 |
| 0.0001 | 7.5482 | 28600 | 0.2886 | 23923680 |
| 0.0 | 7.6010 | 28800 | 0.3007 | 24091168 |
| 0.0 | 7.6538 | 29000 | 0.2789 | 24258016 |
| 0.0 | 7.7066 | 29200 | 0.2825 | 24427808 |
| 0.0 | 7.7594 | 29400 | 0.2882 | 24596288 |
| 0.0 | 7.8122 | 29600 | 0.2831 | 24764192 |
| 0.0 | 7.8650 | 29800 | 0.2660 | 24932000 |
| 0.0 | 7.9178 | 30000 | 0.2519 | 25100224 |
| 0.0229 | 7.9706 | 30200 | 0.2555 | 25267808 |
| 0.0 | 8.0232 | 30400 | 0.2564 | 25433440 |
| 0.0 | 8.0760 | 30600 | 0.2685 | 25600672 |
| 0.0 | 8.1288 | 30800 | 0.2806 | 25769408 |
| 0.0 | 8.1816 | 31000 | 0.2828 | 25936160 |
| 0.0 | 8.2344 | 31200 | 0.2733 | 26103744 |
| 0.0 | 8.2872 | 31400 | 0.2767 | 26270560 |
| 0.0 | 8.3400 | 31600 | 0.2765 | 26437536 |
| 0.0 | 8.3928 | 31800 | 0.2866 | 26604480 |
| 0.0 | 8.4456 | 32000 | 0.2873 | 26771680 |
| 0.0 | 8.4984 | 32200 | 0.2827 | 26940256 |
| 0.0 | 8.5511 | 32400 | 0.2836 | 27107680 |
| 0.0 | 8.6039 | 32600 | 0.2861 | 27274048 |
| 0.0 | 8.6567 | 32800 | 0.2895 | 27440544 |
| 0.0 | 8.7095 | 33000 | 0.2866 | 27608000 |
| 0.0 | 8.7623 | 33200 | 0.2908 | 27776704 |
| 0.0 | 8.8151 | 33400 | 0.2903 | 27942752 |
| 0.0 | 8.8679 | 33600 | 0.2963 | 28108864 |
| 0.0 | 8.9207 | 33800 | 0.2992 | 28275296 |
| 0.0 | 8.9735 | 34000 | 0.2939 | 28443520 |
| 0.0 | 9.0261 | 34200 | 0.3001 | 28609776 |
| 0.0 | 9.0789 | 34400 | 0.3015 | 28777712 |
| 0.0 | 9.1317 | 34600 | 0.3028 | 28944144 |
| 0.0 | 9.1845 | 34800 | 0.3033 | 29111152 |
| 0.0002 | 9.2373 | 35000 | 0.2992 | 29278000 |
| 0.0 | 9.2901 | 35200 | 0.2962 | 29443792 |
| 0.0 | 9.3429 | 35400 | 0.2975 | 29609072 |
| 0.0 | 9.3957 | 35600 | 0.2983 | 29776592 |
| 0.0 | 9.4485 | 35800 | 0.2987 | 29941616 |
| 0.0 | 9.5013 | 36000 | 0.2990 | 30110160 |
| 0.0 | 9.5540 | 36200 | 0.2988 | 30277744 |
| 0.0 | 9.6068 | 36400 | 0.2996 | 30447152 |
| 0.0 | 9.6596 | 36600 | 0.3006 | 30612976 |
| 0.0 | 9.7124 | 36800 | 0.3007 | 30780240 |
| 0.0 | 9.7652 | 37000 | 0.3018 | 30948048 |
| 0.0 | 9.8180 | 37200 | 0.3031 | 31116368 |
| 0.0 | 9.8708 | 37400 | 0.3022 | 31283888 |
| 0.0 | 9.9236 | 37600 | 0.3025 | 31452560 |
| 0.0 | 9.9764 | 37800 | 0.3033 | 31620720 |
| 0.0 | 10.0290 | 38000 | 0.3030 | 31786016 |
| 0.0 | 10.0818 | 38200 | 0.3018 | 31952768 |
| 0.0 | 10.1346 | 38400 | 0.3032 | 32120320 |
| 0.0 | 10.1874 | 38600 | 0.3024 | 32287584 |
| 0.0 | 10.2402 | 38800 | 0.3030 | 32455072 |
| 0.0 | 10.2930 | 39000 | 0.3025 | 32621184 |
| 0.0 | 10.3458 | 39200 | 0.3031 | 32788960 |
| 0.0 | 10.3986 | 39400 | 0.3028 | 32955776 |
| 0.0 | 10.4514 | 39600 | 0.3024 | 33122816 |
| 0.0 | 10.5042 | 39800 | 0.3032 | 33291072 |
| 0.0 | 10.5569 | 40000 | 0.3027 | 33458560 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
swarup3204/byt5-sanskrit-original-data-ft | swarup3204 | "2025-04-19T11:47:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"sa",
"arxiv:1910.09700",
"base_model:chronbmm/sanskrit5-multitask",
"base_model:finetune:chronbmm/sanskrit5-multitask",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-04-19T07:46:59Z" | ---
library_name: transformers
license: apache-2.0
language:
- sa
base_model:
- chronbmm/sanskrit5-multitask
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
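In the meantime, a minimal sketch using the standard 🤗 Transformers seq2seq API (the input below, and any task prefix this fine-tune expects, are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Standard seq2seq loading; the prompt/prefix format expected by this
# fine-tune is not documented, so the input is only illustrative.
model_id = "swarup3204/byt5-sanskrit-original-data-ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("धर्मक्षेत्रे कुरुक्षेत्रे समवेता युयुत्सवः", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```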
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Beinsezii/Mistral-Small-24B-Instruct-2501-Q6_K_C-GGUF | Beinsezii | "2025-04-19T11:47:22Z" | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T11:16:34Z" | ---
license: apache-2.0
---
https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501
Q6_K_C: Q6_K weights, copied output, copied embed
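For example, loading the quant at 24K context via llama-cpp-python might look like this (a sketch: the GGUF filename is assumed, so check the repo's files, and flag values depend on your hardware):
```python
from llama_cpp import Llama

# Load the Q6_K_C GGUF with a 24K context window, offloading all layers to the GPU.
# The filename below is an assumption; use the actual file from this repository.
llm = Llama(
    model_path="Mistral-Small-24B-Instruct-2501-Q6_K_C.gguf",
    n_ctx=24576,        # 24K context
    n_gpu_layers=-1,    # offload every layer
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Mistral Small 24B release in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```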
Fits 24K CTX on a 24GiB GPU |
Dans-DiscountModels/7b-m-dans-optimizersweeps-repremover-1-ademamix-hi-lr-b1_0.9-b2_0.999-b3_0.999-a10 | Dans-DiscountModels | "2025-04-19T11:45:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:Dans-DiscountModels/pretokenization-test-3",
"base_model:Dans-DiscountModels/7b-m-dans-personalityengine-v1.2.1-rc-2",
"base_model:finetune:Dans-DiscountModels/7b-m-dans-personalityengine-v1.2.1-rc-2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T11:12:01Z" | ---
library_name: transformers
base_model: Dans-DiscountModels/7b-m-dans-personalityengine-v1.2.1-rc-2
tags:
- axolotl
- generated_from_trainer
datasets:
- Dans-DiscountModels/pretokenization-test-3
model-index:
- name: 7b-m-dans-optimizersweeps-repremover-1-ademamix-hi-lr-b1_0.9-b2_0.999-b3_0.999-a10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0`
```yaml
base_model: Dans-DiscountModels/7b-m-dans-personalityengine-v1.2.1-rc-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code:
# wandb configuration
wandb_project: 7b-m-dans-optimizersweeps
wandb_watch:
wandb_run_id: repremover-1-1-ademamix-hi-lr-b1_0.9-b2_0.999-b3_0.999-a10
wandb_log_model:
# push checkpoints to hub
hub_model_id: Dans-DiscountModels/7b-m-dans-optimizersweeps-repremover-1-ademamix-hi-lr-b1_0.9-b2_0.999-b3_0.999-a10
# how to push checkpoints to hub
# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy
hub_strategy: "every_save"
# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets
# Required to be true when used in combination with `push_dataset_to_hub`
hf_use_auth_token: true
# where to save the finished model to
output_dir: ./7b-m-dans-optimizersweeps
# where to save the dataset to
dataset_prepared_path: ./7b-m-dans-optimizersweeps-data
save_safetensors: true
# dataset settings (local or huggingface repo)
datasets:
- path: Dans-DiscountModels/pretokenization-test-3
ds_type: parquet
type:
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
adapter:
lora_model_dir:
val_set_size: 0.01
sequence_len: 8192
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: true
gradient_checkpointing: true
# gradient_checkpointing_kwargs:
# use_reentrant: false
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 3
optimizer: ademamix
optim_args: "beta1=0.9,beta2=0.999,beta3=0.999,alpha=10"
lr_scheduler: rex
learning_rate: 0.0000003
cosine_min_lr_ratio:
# weight_decay: 0.03
max_grad_norm: 0.001
train_on_inputs: false
group_by_length: true
bf16: true
fp16: false
tf32: false
early_stopping_patience:
resume_from_checkpoint:
auto_resume_from_checkpoints: false
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch: 24
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
save_total_limit: 2
debug: false
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# 7b-m-dans-optimizersweeps-repremover-1-ademamix-hi-lr-b1_0.9-b2_0.999-b3_0.999-a10
This model is a fine-tuned version of [Dans-DiscountModels/7b-m-dans-personalityengine-v1.2.1-rc-2](https://huggingface.co/Dans-DiscountModels/7b-m-dans-personalityengine-v1.2.1-rc-2) on the Dans-DiscountModels/pretokenization-test-3 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: ademamix with beta1=0.9, beta2=0.999, beta3=0.999, alpha=10
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 41
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0376 | 0.0072 | 1 | 2.1457 |
| 2.2661 | 0.0432 | 6 | 2.1208 |
| 2.307 | 0.0863 | 12 | 2.1567 |
| 2.1831 | 0.1295 | 18 | 2.1175 |
| 2.2321 | 0.1727 | 24 | 2.1267 |
| 2.0512 | 0.2158 | 30 | 2.1440 |
| 2.1275 | 0.2590 | 36 | 2.1177 |
| 2.0276 | 0.3022 | 42 | 2.1115 |
| 2.0803 | 0.3453 | 48 | 2.1289 |
| 2.1525 | 0.3885 | 54 | 2.1259 |
| 2.0461 | 0.4317 | 60 | 2.1080 |
| 2.0416 | 0.4748 | 66 | 2.1281 |
| 2.2091 | 0.5180 | 72 | 2.1134 |
| 2.1002 | 0.5612 | 78 | 2.1163 |
| 2.0207 | 0.6043 | 84 | 2.1346 |
| 2.1418 | 0.6475 | 90 | 2.1178 |
| 2.0907 | 0.6906 | 96 | 2.1061 |
| 2.1079 | 0.7338 | 102 | 2.1228 |
| 2.0733 | 0.7770 | 108 | 2.1221 |
| 2.0229 | 0.8201 | 114 | 2.1011 |
| 2.0239 | 0.8633 | 120 | 2.1111 |
| 1.9952 | 0.9065 | 126 | 2.1107 |
| 2.1515 | 0.9496 | 132 | 2.0999 |
| 1.9878 | 0.9928 | 138 | 2.1050 |
| 2.0482 | 1.0360 | 144 | 2.1151 |
| 1.9203 | 1.0791 | 150 | 2.0964 |
| 2.0638 | 1.1223 | 156 | 2.1202 |
| 1.9855 | 1.1655 | 162 | 2.1308 |
| 1.9788 | 1.2086 | 168 | 2.1189 |
| 1.9651 | 1.2518 | 174 | 2.1124 |
| 1.9656 | 1.2950 | 180 | 2.1234 |
| 2.0319 | 1.3381 | 186 | 2.1157 |
| 2.0527 | 1.3813 | 192 | 2.1175 |
| 2.0895 | 1.4245 | 198 | 2.1198 |
| 1.9853 | 1.4676 | 204 | 2.1186 |
| 2.0482 | 1.5108 | 210 | 2.1123 |
| 1.892 | 1.5540 | 216 | 2.1013 |
| 2.0457 | 1.5971 | 222 | 2.1133 |
| 1.9954 | 1.6403 | 228 | 2.1084 |
| 1.9719 | 1.6835 | 234 | 2.1045 |
| 2.0459 | 1.7266 | 240 | 2.1159 |
| 1.9969 | 1.7698 | 246 | 2.1020 |
| 1.9273 | 1.8129 | 252 | 2.1154 |
| 1.9269 | 1.8561 | 258 | 2.1231 |
| 1.9751 | 1.8993 | 264 | 2.1132 |
| 1.9338 | 1.9424 | 270 | 2.0767 |
| 1.9924 | 1.9856 | 276 | 2.1092 |
| 1.9114 | 2.0288 | 282 | 2.1149 |
| 1.9014 | 2.0719 | 288 | 2.1025 |
| 1.9959 | 2.1151 | 294 | 2.0986 |
| 1.9145 | 2.1583 | 300 | 2.1133 |
| 1.8756 | 2.2014 | 306 | 2.1224 |
| 1.8999 | 2.2446 | 312 | 2.1034 |
| 1.963 | 2.2878 | 318 | 2.1198 |
| 1.9189 | 2.3309 | 324 | 2.1308 |
| 1.9539 | 2.3741 | 330 | 2.1069 |
| 1.9463 | 2.4173 | 336 | 2.1014 |
| 1.9892 | 2.4604 | 342 | 2.1129 |
| 1.9526 | 2.5036 | 348 | 2.1019 |
| 2.0455 | 2.5468 | 354 | 2.1284 |
| 1.9248 | 2.5899 | 360 | 2.1191 |
| 1.8867 | 2.6331 | 366 | 2.0985 |
| 1.7824 | 2.6763 | 372 | 2.1137 |
| 1.6577 | 2.7194 | 378 | 2.0967 |
| 1.7822 | 2.7626 | 384 | 2.0938 |
| 1.84 | 2.8058 | 390 | 2.1322 |
| 1.8023 | 2.8489 | 396 | 2.0898 |
| 1.8613 | 2.8921 | 402 | 2.1231 |
| 1.7858 | 2.9353 | 408 | 2.1254 |
| 1.7629 | 2.9784 | 414 | 2.0850 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
rbelanec/train_mrpc_1744902649 | rbelanec | "2025-04-19T11:45:31Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | "2025-04-19T04:29:20Z" | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_mrpc_1744902649
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_mrpc_1744902649
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the mrpc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1272
- Num Input Tokens Seen: 65784064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:--------:|:-----:|:---------------:|:-----------------:|
| 0.1792 | 0.9685 | 200 | 0.1595 | 329312 |
| 0.1616 | 1.9395 | 400 | 0.1593 | 658560 |
| 0.1105 | 2.9104 | 600 | 0.1366 | 987040 |
| 0.093 | 3.8814 | 800 | 0.1343 | 1316448 |
| 0.1363 | 4.8523 | 1000 | 0.1369 | 1644608 |
| 0.1061 | 5.8232 | 1200 | 0.1310 | 1974016 |
| 0.1695 | 6.7942 | 1400 | 0.1355 | 2303584 |
| 0.0548 | 7.7651 | 1600 | 0.1272 | 2630688 |
| 0.1025 | 8.7361 | 1800 | 0.1282 | 2959808 |
| 0.1103 | 9.7070 | 2000 | 0.1275 | 3287584 |
| 0.073 | 10.6780 | 2200 | 0.1276 | 3617920 |
| 0.0808 | 11.6489 | 2400 | 0.1375 | 3945536 |
| 0.0832 | 12.6199 | 2600 | 0.1372 | 4274560 |
| 0.043 | 13.5908 | 2800 | 0.1441 | 4603168 |
| 0.1192 | 14.5617 | 3000 | 0.1549 | 4932448 |
| 0.0522 | 15.5327 | 3200 | 0.1576 | 5261312 |
| 0.0495 | 16.5036 | 3400 | 0.1620 | 5589632 |
| 0.0463 | 17.4746 | 3600 | 0.1867 | 5918112 |
| 0.0316 | 18.4455 | 3800 | 0.2041 | 6246368 |
| 0.0302 | 19.4165 | 4000 | 0.2137 | 6574848 |
| 0.0489 | 20.3874 | 4200 | 0.2474 | 6903520 |
| 0.0428 | 21.3584 | 4400 | 0.2885 | 7231904 |
| 0.0092 | 22.3293 | 4600 | 0.2955 | 7561504 |
| 0.0247 | 23.3002 | 4800 | 0.3410 | 7890912 |
| 0.0108 | 24.2712 | 5000 | 0.3752 | 8218592 |
| 0.003 | 25.2421 | 5200 | 0.3848 | 8548256 |
| 0.0001 | 26.2131 | 5400 | 0.4551 | 8876704 |
| 0.0038 | 27.1840 | 5600 | 0.5005 | 9206272 |
| 0.0011 | 28.1550 | 5800 | 0.5081 | 9534720 |
| 0.0186 | 29.1259 | 6000 | 0.5632 | 9864384 |
| 0.0001 | 30.0969 | 6200 | 0.6207 | 10193376 |
| 0.0001 | 31.0678 | 6400 | 0.6485 | 10521952 |
| 0.0001 | 32.0387 | 6600 | 0.6780 | 10851520 |
| 0.0 | 33.0097 | 6800 | 0.7211 | 11180544 |
| 0.0001 | 33.9782 | 7000 | 0.7425 | 11509344 |
| 0.0 | 34.9492 | 7200 | 0.7592 | 11838208 |
| 0.0022 | 35.9201 | 7400 | 0.7934 | 12167872 |
| 0.0 | 36.8910 | 7600 | 0.7808 | 12496352 |
| 0.0 | 37.8620 | 7800 | 0.8029 | 12826048 |
| 0.0 | 38.8329 | 8000 | 0.8225 | 13155040 |
| 0.0003 | 39.8039 | 8200 | 0.8472 | 13483008 |
| 0.0 | 40.7748 | 8400 | 0.8385 | 13812064 |
| 0.0 | 41.7458 | 8600 | 0.9110 | 14140576 |
| 0.0001 | 42.7167 | 8800 | 0.8721 | 14469248 |
| 0.0006 | 43.6877 | 9000 | 0.8889 | 14796672 |
| 0.0002 | 44.6586 | 9200 | 0.8948 | 15126752 |
| 0.0042 | 45.6295 | 9400 | 0.9480 | 15456160 |
| 0.0006 | 46.6005 | 9600 | 0.8594 | 15784928 |
| 0.0 | 47.5714 | 9800 | 0.9047 | 16113248 |
| 0.0 | 48.5424 | 10000 | 0.8897 | 16442496 |
| 0.0 | 49.5133 | 10200 | 0.8949 | 16772640 |
| 0.0 | 50.4843 | 10400 | 0.8780 | 17100000 |
| 0.0 | 51.4552 | 10600 | 0.8463 | 17428768 |
| 0.0 | 52.4262 | 10800 | 0.8947 | 17757344 |
| 0.0 | 53.3971 | 11000 | 0.8721 | 18085920 |
| 0.0 | 54.3680 | 11200 | 0.8789 | 18414336 |
| 0.0 | 55.3390 | 11400 | 0.8813 | 18743040 |
| 0.0 | 56.3099 | 11600 | 0.8915 | 19072928 |
| 0.0 | 57.2809 | 11800 | 0.8866 | 19401376 |
| 0.0 | 58.2518 | 12000 | 0.8915 | 19730336 |
| 0.0 | 59.2228 | 12200 | 0.8939 | 20059488 |
| 0.0 | 60.1937 | 12400 | 0.8958 | 20388064 |
| 0.0 | 61.1646 | 12600 | 0.8991 | 20718144 |
| 0.0 | 62.1356 | 12800 | 0.9055 | 21048224 |
| 0.0 | 63.1065 | 13000 | 0.9546 | 21376576 |
| 0.0029 | 64.0775 | 13200 | 0.9045 | 21706080 |
| 0.0 | 65.0484 | 13400 | 0.9358 | 22034624 |
| 0.0 | 66.0194 | 13600 | 0.8919 | 22364128 |
| 0.0 | 66.9879 | 13800 | 0.8877 | 22692352 |
| 0.0101 | 67.9588 | 14000 | 0.8636 | 23020864 |
| 0.0 | 68.9298 | 14200 | 0.9585 | 23349920 |
| 0.0 | 69.9007 | 14400 | 0.8971 | 23679072 |
| 0.0 | 70.8717 | 14600 | 0.8881 | 24007776 |
| 0.0 | 71.8426 | 14800 | 0.9130 | 24336640 |
| 0.0 | 72.8136 | 15000 | 0.9017 | 24664576 |
| 0.0 | 73.7845 | 15200 | 0.9239 | 24994848 |
| 0.0 | 74.7554 | 15400 | 0.9034 | 25322720 |
| 0.0 | 75.7264 | 15600 | 0.9104 | 25650784 |
| 0.0013 | 76.6973 | 15800 | 0.9375 | 25980512 |
| 0.0007 | 77.6683 | 16000 | 0.9748 | 26309536 |
| 0.0 | 78.6392 | 16200 | 0.9272 | 26638944 |
| 0.0 | 79.6102 | 16400 | 0.9310 | 26967360 |
| 0.0 | 80.5811 | 16600 | 0.9371 | 27297120 |
| 0.0 | 81.5521 | 16800 | 0.9354 | 27626144 |
| 0.0 | 82.5230 | 17000 | 0.9427 | 27954656 |
| 0.0 | 83.4939 | 17200 | 0.9468 | 28284160 |
| 0.0 | 84.4649 | 17400 | 0.9542 | 28612224 |
| 0.0 | 85.4358 | 17600 | 0.9413 | 28940448 |
| 0.0 | 86.4068 | 17800 | 0.9482 | 29270912 |
| 0.0 | 87.3777 | 18000 | 0.9457 | 29599424 |
| 0.0 | 88.3487 | 18200 | 0.9610 | 29929280 |
| 0.0 | 89.3196 | 18400 | 0.9695 | 30257504 |
| 0.0 | 90.2906 | 18600 | 0.9427 | 30586944 |
| 0.0 | 91.2615 | 18800 | 0.9648 | 30915744 |
| 0.0 | 92.2324 | 19000 | 0.9566 | 31245216 |
| 0.0 | 93.2034 | 19200 | 1.0410 | 31573600 |
| 0.0 | 94.1743 | 19400 | 1.0290 | 31903616 |
| 0.0 | 95.1453 | 19600 | 1.0224 | 32232032 |
| 0.0 | 96.1162 | 19800 | 1.0002 | 32560480 |
| 0.0 | 97.0872 | 20000 | 1.0333 | 32889696 |
| 0.0 | 98.0581 | 20200 | 0.9999 | 33218016 |
| 0.0 | 99.0291 | 20400 | 1.0188 | 33547296 |
| 0.0 | 99.9976 | 20600 | 1.0259 | 33876000 |
| 0.0 | 100.9685 | 20800 | 1.0148 | 34205376 |
| 0.0 | 101.9395 | 21000 | 1.0062 | 34534496 |
| 0.0 | 102.9104 | 21200 | 0.9976 | 34864000 |
| 0.0 | 103.8814 | 21400 | 1.0242 | 35192256 |
| 0.0 | 104.8523 | 21600 | 1.0044 | 35521376 |
| 0.0 | 105.8232 | 21800 | 1.0179 | 35851264 |
| 0.0 | 106.7942 | 22000 | 1.0085 | 36180000 |
| 0.0 | 107.7651 | 22200 | 1.0040 | 36508832 |
| 0.0 | 108.7361 | 22400 | 1.0053 | 36837600 |
| 0.0 | 109.7070 | 22600 | 0.9748 | 37166720 |
| 0.0 | 110.6780 | 22800 | 1.0201 | 37495520 |
| 0.0 | 111.6489 | 23000 | 1.0137 | 37824352 |
| 0.0 | 112.6199 | 23200 | 1.0274 | 38153856 |
| 0.0 | 113.5908 | 23400 | 1.0198 | 38483200 |
| 0.0 | 114.5617 | 23600 | 1.0236 | 38812672 |
| 0.0 | 115.5327 | 23800 | 1.0075 | 39142400 |
| 0.0 | 116.5036 | 24000 | 1.0092 | 39471200 |
| 0.0 | 117.4746 | 24200 | 1.0208 | 39798848 |
| 0.0 | 118.4455 | 24400 | 1.0163 | 40127360 |
| 0.0 | 119.4165 | 24600 | 1.0297 | 40456736 |
| 0.0 | 120.3874 | 24800 | 1.0208 | 40785312 |
| 0.0 | 121.3584 | 25000 | 1.0032 | 41112576 |
| 0.0 | 122.3293 | 25200 | 1.0071 | 41442112 |
| 0.0 | 123.3002 | 25400 | 1.0182 | 41771552 |
| 0.0 | 124.2712 | 25600 | 1.0241 | 42101248 |
| 0.0 | 125.2421 | 25800 | 0.9986 | 42427392 |
| 0.0 | 126.2131 | 26000 | 1.0178 | 42756704 |
| 0.0 | 127.1840 | 26200 | 1.0377 | 43085664 |
| 0.0 | 128.1550 | 26400 | 1.0162 | 43414240 |
| 0.0 | 129.1259 | 26600 | 1.0307 | 43743072 |
| 0.0 | 130.0969 | 26800 | 1.0224 | 44072768 |
| 0.0 | 131.0678 | 27000 | 1.0235 | 44400192 |
| 0.0 | 132.0387 | 27200 | 1.0353 | 44729632 |
| 0.0 | 133.0097 | 27400 | 1.0296 | 45058976 |
| 0.0 | 133.9782 | 27600 | 1.0324 | 45388352 |
| 0.0 | 134.9492 | 27800 | 1.0443 | 45717952 |
| 0.0 | 135.9201 | 28000 | 1.0478 | 46046144 |
| 0.0 | 136.8910 | 28200 | 1.0435 | 46375168 |
| 0.0 | 137.8620 | 28400 | 1.0442 | 46702816 |
| 0.0 | 138.8329 | 28600 | 1.0448 | 47033152 |
| 0.0 | 139.8039 | 28800 | 1.0729 | 47361472 |
| 0.0 | 140.7748 | 29000 | 1.0439 | 47691424 |
| 0.0 | 141.7458 | 29200 | 1.0689 | 48019712 |
| 0.0 | 142.7167 | 29400 | 1.0791 | 48348832 |
| 0.0 | 143.6877 | 29600 | 1.0849 | 48678560 |
| 0.0 | 144.6586 | 29800 | 1.0461 | 49008256 |
| 0.0 | 145.6295 | 30000 | 1.0701 | 49337088 |
| 0.0 | 146.6005 | 30200 | 1.0699 | 49665344 |
| 0.0 | 147.5714 | 30400 | 1.0625 | 49996128 |
| 0.0 | 148.5424 | 30600 | 1.0711 | 50324736 |
| 0.0 | 149.5133 | 30800 | 1.0653 | 50652864 |
| 0.0 | 150.4843 | 31000 | 1.0867 | 50981920 |
| 0.0 | 151.4552 | 31200 | 1.0732 | 51310752 |
| 0.0 | 152.4262 | 31400 | 1.0587 | 51640352 |
| 0.0 | 153.3971 | 31600 | 1.0614 | 51969184 |
| 0.0 | 154.3680 | 31800 | 1.0761 | 52297280 |
| 0.0 | 155.3390 | 32000 | 1.0690 | 52625600 |
| 0.0 | 156.3099 | 32200 | 1.0777 | 52953920 |
| 0.0 | 157.2809 | 32400 | 1.0818 | 53283648 |
| 0.0 | 158.2518 | 32600 | 1.0866 | 53613056 |
| 0.0 | 159.2228 | 32800 | 1.0812 | 53941632 |
| 0.0 | 160.1937 | 33000 | 1.0887 | 54270272 |
| 0.0 | 161.1646 | 33200 | 1.0782 | 54599104 |
| 0.0 | 162.1356 | 33400 | 1.0808 | 54929056 |
| 0.0 | 163.1065 | 33600 | 1.0965 | 55257728 |
| 0.0 | 164.0775 | 33800 | 1.0854 | 55587456 |
| 0.0 | 165.0484 | 34000 | 1.0979 | 55916576 |
| 0.0 | 166.0194 | 34200 | 1.0962 | 56245664 |
| 0.0 | 166.9879 | 34400 | 1.1092 | 56574272 |
| 0.0 | 167.9588 | 34600 | 1.1052 | 56903360 |
| 0.0 | 168.9298 | 34800 | 1.1229 | 57232032 |
| 0.0 | 169.9007 | 35000 | 1.0853 | 57561504 |
| 0.0 | 170.8717 | 35200 | 1.1070 | 57891168 |
| 0.0 | 171.8426 | 35400 | 1.1014 | 58220352 |
| 0.0 | 172.8136 | 35600 | 1.1065 | 58548960 |
| 0.0 | 173.7845 | 35800 | 1.0964 | 58878688 |
| 0.0 | 174.7554 | 36000 | 1.0980 | 59207104 |
| 0.0 | 175.7264 | 36200 | 1.1023 | 59536800 |
| 0.0 | 176.6973 | 36400 | 1.0831 | 59865312 |
| 0.0 | 177.6683 | 36600 | 1.0948 | 60194816 |
| 0.0 | 178.6392 | 36800 | 1.1205 | 60523584 |
| 0.0 | 179.6102 | 37000 | 1.0883 | 60852352 |
| 0.0 | 180.5811 | 37200 | 1.0916 | 61181024 |
| 0.0 | 181.5521 | 37400 | 1.1090 | 61510624 |
| 0.0 | 182.5230 | 37600 | 1.1083 | 61840672 |
| 0.0 | 183.4939 | 37800 | 1.1169 | 62167808 |
| 0.0 | 184.4649 | 38000 | 1.1141 | 62496960 |
| 0.0 | 185.4358 | 38200 | 1.0932 | 62826016 |
| 0.0 | 186.4068 | 38400 | 1.1050 | 63154784 |
| 0.0 | 187.3777 | 38600 | 1.0873 | 63483904 |
| 0.0 | 188.3487 | 38800 | 1.1244 | 63811808 |
| 0.0 | 189.3196 | 39000 | 1.1015 | 64139488 |
| 0.0 | 190.2906 | 39200 | 1.0930 | 64467808 |
| 0.0 | 191.2615 | 39400 | 1.0899 | 64798112 |
| 0.0 | 192.2324 | 39600 | 1.0952 | 65126304 |
| 0.0 | 193.2034 | 39800 | 1.1142 | 65455776 |
| 0.0 | 194.1743 | 40000 | 1.1018 | 65784064 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
mlx-community/InternVL3-8B-bf16 | mlx-community | "2025-04-19T11:43:56Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"mlx",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"base_model:OpenGVLab/InternVL3-1B-Instruct",
"base_model:finetune:OpenGVLab/InternVL3-1B-Instruct",
"license:other",
"region:us"
] | image-text-to-text | "2025-04-18T12:20:43Z" | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3-1B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
- mlx
---
# mlx-community/InternVL3-8B-bf16
This model was converted to MLX format from `models/InternVL3-8B` using mlx-vlm version **0.1.23**.
Refer to the [original model card](https://huggingface.co/OpenGVLab/InternVL3-8B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/InternVL3-8B-bf16 --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
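The same model can also be driven from Python. The sketch below follows the general mlx-vlm API (`load`, `apply_chat_template`, `generate`); exact signatures can differ between mlx-vlm releases and the image path is a placeholder, so treat it as an outline rather than a verified script.
```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
model_path = "mlx-community/InternVL3-8B-bf16"
# Load the converted weights, the processor, and the model config
model, processor = load(model_path)
config = load_config(model_path)
# Build a chat-formatted prompt that references a single image (placeholder path)
images = ["path_to_image.jpg"]
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=len(images))
# Generate and print the decoded response
output = generate(model, processor, prompt, images, max_tokens=100, verbose=False)
print(output)
```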
|
warmachine68/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_zealous_cat | warmachine68 | "2025-04-19T11:42:54Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am solitary zealous cat",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-08T22:38:25Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_zealous_cat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am solitary zealous cat
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_zealous_cat
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="warmachine68/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-solitary_zealous_cat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
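The reward functions and prompts used in the swarm run are not documented here. Purely as an illustration of how a GRPO fine-tune of this base model is wired up in TRL, the sketch below uses a placeholder length-based reward and a placeholder public dataset; neither reflects the actual training setup of this checkpoint.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer
# Placeholder reward: favors longer completions (NOT the reward used for this model)
def reward_len(completions, **kwargs):
    return [float(len(c)) for c in completions]
# Placeholder prompt dataset (NOT the swarm's actual data)
dataset = load_dataset("trl-lib/tldr", split="train")
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```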
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.1
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF | mradermacher | "2025-04-19T11:41:26Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:grimjim/kunoichi-lemon-royale-v2experiment2-32K-7B",
"base_model:quantized:grimjim/kunoichi-lemon-royale-v2experiment2-32K-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-19T10:55:02Z" | ---
base_model: grimjim/kunoichi-lemon-royale-v2experiment2-32K-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/grimjim/kunoichi-lemon-royale-v2experiment2-32K-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF | mradermacher | "2025-04-19T11:38:07Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:grimjim/kunoichi-lemon-royale-v2experiment2-32K-7B",
"base_model:quantized:grimjim/kunoichi-lemon-royale-v2experiment2-32K-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T09:00:09Z" | ---
base_model: grimjim/kunoichi-lemon-royale-v2experiment2-32K-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/grimjim/kunoichi-lemon-royale-v2experiment2-32K-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
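As a concrete starting point, the sketch below loads one of the quants from the table with llama-cpp-python; any GGUF-capable runtime (llama.cpp, LM Studio, ollama, etc.) works equally well, and the context size and sampling settings here are arbitrary example values, not recommendations.
```python
from llama_cpp import Llama
# Point model_path at a quant downloaded from the table below, e.g. the Q4_K_M file
llm = Llama(
    model_path="kunoichi-lemon-royale-v2experiment2-32K-7B.Q4_K_M.gguf",
    n_ctx=4096,       # example context window (the model name suggests 32K support)
    n_gpu_layers=-1,  # offload all layers if a GPU is available
)
out = llm("Write a short scene set in a tea house.", max_tokens=128)
print(out["choices"][0]["text"])
```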
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/kunoichi-lemon-royale-v2experiment2-32K-7B-GGUF/resolve/main/kunoichi-lemon-royale-v2experiment2-32K-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/teapotllm-chat-GGUF | mradermacher | "2025-04-19T11:36:48Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:teapotai/teapotllm-chat",
"base_model:quantized:teapotai/teapotllm-chat",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T11:30:30Z" | ---
base_model: teapotai/teapotllm-chat
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/teapotai/teapotllm-chat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q4_K_S.gguf) | Q4_K_S | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q5_K_S.gguf) | Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q5_K_M.gguf) | Q5_K_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q6_K.gguf) | Q6_K | 0.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.Q8_0.gguf) | Q8_0 | 0.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/teapotllm-chat-GGUF/resolve/main/teapotllm-chat.f16.gguf) | f16 | 1.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
fedovtt/a1097dee-9584-49fe-8602-d38eff229ef6 | fedovtt | "2025-04-19T11:35:05Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:adapter:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-19T10:02:03Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a1097dee-9584-49fe-8602-d38eff229ef6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 33ca191ef861beb8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/33ca191ef861beb8_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/a1097dee-9584-49fe-8602-d38eff229ef6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/33ca191ef861beb8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 36fecfe5-f91d-46c3-aeef-47b486bafbe0
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 36fecfe5-f91d-46c3-aeef-47b486bafbe0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a1097dee-9584-49fe-8602-d38eff229ef6
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) on an unnamed dataset (see the data files listed in the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0082 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jethrowang/whisper-tiny_tat-esc_exp_nr_0.5_embed | jethrowang | "2025-04-19T11:30:36Z" | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:formospeech/tat_asr_aligned",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-04-18T10:47:51Z" | ---
library_name: transformers
language:
- zh
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- formospeech/tat_asr_aligned
model-index:
- name: Whisper Tiny Taiwanese (exp_nr_0.5_embed)
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Taiwanese (exp_nr_0.5_embed)
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the TAT ASR Aligned dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3660
- Cer: 32.6917
## Model description
More information needed
## Intended uses & limitations
More information needed
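Although this section is left open, the checkpoint can be exercised like any Whisper fine-tune. A minimal sketch with the 🤗 Transformers `pipeline` API is shown below; the audio path is a placeholder, and nothing is implied about accuracy outside the Taiwanese (TAT) domain it was tuned on.
```python
from transformers import pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="jethrowang/whisper-tiny_tat-esc_exp_nr_0.5_embed",
)
# Placeholder audio file; 16 kHz mono audio is the usual Whisper input
result = asr("sample_taiwanese_utterance.wav")
print(result["text"])
```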
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 681
- training_steps: 6810
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.4831 | 0.9985 | 681 | 0.9359 | 46.4492 |
| 0.2884 | 1.9971 | 1362 | 0.8960 | 35.3926 |
| 0.1794 | 2.9956 | 2043 | 0.9427 | 38.7004 |
| 0.1035 | 3.9941 | 2724 | 1.0423 | 35.5451 |
| 0.0585 | 4.9927 | 3405 | 1.1355 | 35.1530 |
| 0.0287 | 5.9912 | 4086 | 1.2120 | 35.6566 |
| 0.0146 | 6.9897 | 4767 | 1.2845 | 33.8159 |
| 0.0052 | 7.9883 | 5448 | 1.3243 | 33.4713 |
| 0.0018 | 8.9868 | 6129 | 1.3504 | 32.9579 |
| 0.0008 | 9.9853 | 6810 | 1.3660 | 32.6917 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.0.0.post304
- Datasets 3.3.2
- Tokenizers 0.21.0
|
YOYO-AI/QwQ-Olympic-coder-32B | YOYO-AI | "2025-04-19T11:28:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Qwen/QwQ-32B",
"base_model:merge:Qwen/QwQ-32B",
"base_model:Qwen/Qwen2.5-Coder-32B",
"base_model:merge:Qwen/Qwen2.5-Coder-32B",
"base_model:open-r1/OlympicCoder-32B",
"base_model:merge:open-r1/OlympicCoder-32B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T11:14:10Z" | ---
base_model:
- Qwen/Qwen2.5-Coder-32B
- Qwen/QwQ-32B
- open-r1/OlympicCoder-32B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [Qwen/Qwen2.5-Coder-32B](https://huggingface.co/Qwen/Qwen2.5-Coder-32B) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
* [open-r1/OlympicCoder-32B](https://huggingface.co/open-r1/OlympicCoder-32B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: sce
models:
# Pivot model
- model: Qwen/Qwen2.5-Coder-32B
# Target models
- model: Qwen/QwQ-32B
- model: open-r1/OlympicCoder-32B
base_model: Qwen/Qwen2.5-Coder-32B
parameters:
select_topk: 1
dtype: bfloat16
tokenizer_source: Qwen/QwQ-32B
normalize: true
int8_mask: true
```
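For reference, a merge like this is normally reproduced by feeding the YAML above to mergekit, either via the `mergekit-yaml` CLI or its Python entry point. The sketch below uses the Python route; the output path and options are placeholder choices, not the settings used for this repository.
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge
# Load the YAML configuration shown above (saved locally as config.yaml)
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))
# Run the SCE merge and write the result to a placeholder output directory
run_merge(
    merge_config,
    "./QwQ-Olympic-coder-32B-merged",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```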
|
mradermacher/Nemo-Chuckles-12B-GGUF | mradermacher | "2025-04-19T11:28:26Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:spacematt/Nemo-Chuckles-12B",
"base_model:quantized:spacematt/Nemo-Chuckles-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T09:22:20Z" | ---
base_model: spacematt/Nemo-Chuckles-12B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/spacematt/Nemo-Chuckles-12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nemo-Chuckles-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-Chuckles-12B-GGUF/resolve/main/Nemo-Chuckles-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pnle/Qwen2.5-1.5B-Open-R1-Distill | pnle | "2025-04-19T11:26:39Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-08T08:45:49Z" | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="pnle/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pnnnnnnn-sun-yat-sen-university/huggingface/runs/6wo0gpqn)
This model was trained with SFT.
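The exact training script and hyperparameters are not reproduced in this card. As a rough outline only, a TRL SFT run over the linked dataset looks something like the sketch below; the split name, output directory, and omitted settings are assumptions rather than the actual recipe.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
# The dataset linked above; the split name is an assumption
dataset = load_dataset("open-r1/OpenR1-Math-220k", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen2.5-1.5B-Open-R1-Distill"),
)
trainer.train()
```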
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ppagare/Meta-Llama-3.1-8B-Instruct-pg-chatbot-LORA | ppagare | "2025-04-19T11:23:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-18T10:11:22Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ppagare
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Gemma-3-1B-Roblox-Luau-GGUF | mradermacher | "2025-04-19T11:23:08Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"roblox",
"luau",
"en",
"dataset:boatbomber/roblox-info-dump",
"dataset:boatbomber/the-luau-stack",
"base_model:boatbomber/Gemma-3-1B-Roblox-Luau",
"base_model:quantized:boatbomber/Gemma-3-1B-Roblox-Luau",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T11:11:15Z" | ---
base_model: boatbomber/Gemma-3-1B-Roblox-Luau
datasets:
- boatbomber/roblox-info-dump
- boatbomber/the-luau-stack
language:
- en
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
quantized_by: mradermacher
tags:
- chat
- roblox
- luau
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/boatbomber/Gemma-3-1B-Roblox-Luau
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q2_K.gguf) | Q2_K | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.Q8_0.gguf) | Q8_0 | 1.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Roblox-Luau-GGUF/resolve/main/Gemma-3-1B-Roblox-Luau.f16.gguf) | f16 | 2.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-3 | Shaleen123 | "2025-04-19T11:22:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T11:16:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama_3.x_70b_Triads_V7-GGUF | mradermacher | "2025-04-19T11:19:59Z" | 19 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:NexesMess/Llama_3.x_70b_Triads_V7",
"base_model:quantized:NexesMess/Llama_3.x_70b_Triads_V7",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-14T10:04:24Z" | ---
base_model: NexesMess/Llama_3.x_70b_Triads_V7
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NexesMess/Llama_3.x_70b_Triads_V7
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
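The Q6_K and Q8_0 entries in the table below are plain byte-level splits (`.part1of2` / `.part2of2`); they only need to be concatenated back into a single file before use. A simple `cat part1 part2 > file.gguf` does the same thing as this Python sketch.
```python
import shutil
parts = [
    "Llama_3.x_70b_Triads_V7.Q6_K.gguf.part1of2",
    "Llama_3.x_70b_Triads_V7.Q6_K.gguf.part2of2",
]
# Stream the parts into one GGUF file without loading them into memory
with open("Llama_3.x_70b_Triads_V7.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```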
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V7-GGUF/resolve/main/Llama_3.x_70b_Triads_V7.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RioShiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2 | RioShiina | "2025-04-19T11:17:30Z" | 0 | 0 | null | [
"ja",
"base_model:abeja/ABEJA-Qwen2.5-7b-Japanese-v0.1",
"base_model:quantized:abeja/ABEJA-Qwen2.5-7b-Japanese-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T11:17:28Z" | ---
license: apache-2.0
base_model: abeja/ABEJA-Qwen2.5-7b-Japanese-v0.1
base_model_relation: quantized
language:
- ja
---
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.2.8">turboderp's ExLlamaV2 v0.2.8</a> for quantization.
**[2.2bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/2.2bpw)**
**[3.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/3.0bpw)**
**[4.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/4.0bpw)**
**[5.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/5.0bpw)**
**[6.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/6.0bpw)**
**[7.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/7.0bpw)**
**[8.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/8.0bpw)**
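Each bit-rate above lives on its own branch of the repository, so a specific quant can be fetched by passing the branch name as `revision`. A minimal sketch with `huggingface_hub` (the chosen branch is just an example):
```python
from huggingface_hub import snapshot_download
# Download only the 4.0bpw branch of this repository
local_dir = snapshot_download(
    repo_id="rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2",
    revision="4.0bpw",
)
print(local_dir)
```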
## Calibration Dataset
[TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm)
## ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2
- Model creator: [abeja](https://huggingface.co/abeja)
- Original model: [ABEJA-Qwen2.5-7b-Japanese-v0.1](https://huggingface.co/abeja/ABEJA-Qwen2.5-7b-Japanese-v0.1)
|
rbelanec/train_qqp_1744902597 | rbelanec | "2025-04-19T11:17:03Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | "2025-04-17T15:18:48Z" | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_qqp_1744902597
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_qqp_1744902597
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the qqp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2268
- Num Input Tokens Seen: 49022016
## Model description
More information needed
## Intended uses & limitations
More information needed
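No usage guidance is given here; as a tentative sketch only, the prompt-tuning adapter can presumably be loaded on top of the gated Llama-3 base with PEFT. The prompt format shown is an invented example, not the template used during training.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
adapter_id = "rbelanec/train_qqp_1744902597"
# Loads the base model recorded in the adapter config and applies the learned prompt vectors
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
# Hypothetical QQP-style query; the real training prompt template is not documented here
prompt = "Question 1: How can I learn Python quickly?\nQuestion 2: What is the fastest way to learn Python?\nAre these questions duplicates?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```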
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.3
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 0.3195 | 0.0098 | 200 | 0.3106 | 245536 |
| 0.2888 | 0.0195 | 400 | 0.3139 | 489696 |
| 0.256 | 0.0293 | 600 | 0.2695 | 737824 |
| 0.2961 | 0.0391 | 800 | 0.2602 | 981856 |
| 0.2767 | 0.0489 | 1000 | 0.2627 | 1225952 |
| 0.3652 | 0.0586 | 1200 | 0.3079 | 1469920 |
| 0.285 | 0.0684 | 1400 | 0.2651 | 1715360 |
| 0.2802 | 0.0782 | 1600 | 0.2552 | 1961952 |
| 0.2617 | 0.0879 | 1800 | 0.2957 | 2205952 |
| 0.3155 | 0.0977 | 2000 | 0.2709 | 2453792 |
| 0.3359 | 0.1075 | 2200 | 0.2968 | 2698976 |
| 0.2723 | 0.1173 | 2400 | 0.2812 | 2944000 |
| 0.3139 | 0.1270 | 2600 | 0.2977 | 3190496 |
| 0.228 | 0.1368 | 2800 | 0.2626 | 3439104 |
| 0.2619 | 0.1466 | 3000 | 0.2564 | 3684640 |
| 0.3061 | 0.1564 | 3200 | 0.2572 | 3931744 |
| 0.2571 | 0.1661 | 3400 | 0.2869 | 4179680 |
| 0.2624 | 0.1759 | 3600 | 0.2561 | 4424000 |
| 0.2622 | 0.1857 | 3800 | 0.2517 | 4667488 |
| 0.2741 | 0.1954 | 4000 | 0.2555 | 4910752 |
| 0.2699 | 0.2052 | 4200 | 0.2509 | 5157152 |
| 0.2502 | 0.2150 | 4400 | 0.2667 | 5403360 |
| 0.2635 | 0.2248 | 4600 | 0.2509 | 5647360 |
| 0.2642 | 0.2345 | 4800 | 0.2685 | 5889632 |
| 0.2264 | 0.2443 | 5000 | 0.2568 | 6135424 |
| 0.2425 | 0.2541 | 5200 | 0.2495 | 6380320 |
| 0.2447 | 0.2638 | 5400 | 0.2495 | 6627360 |
| 0.2926 | 0.2736 | 5600 | 0.2547 | 6873760 |
| 0.3135 | 0.2834 | 5800 | 0.2659 | 7121504 |
| 0.2274 | 0.2932 | 6000 | 0.2668 | 7366208 |
| 0.2814 | 0.3029 | 6200 | 0.2612 | 7615264 |
| 0.2345 | 0.3127 | 6400 | 0.2557 | 7860128 |
| 0.2643 | 0.3225 | 6600 | 0.2545 | 8103360 |
| 0.2393 | 0.3323 | 6800 | 0.2910 | 8350976 |
| 0.2462 | 0.3420 | 7000 | 0.2714 | 8597664 |
| 0.2514 | 0.3518 | 7200 | 0.2714 | 8842400 |
| 0.2713 | 0.3616 | 7400 | 0.2500 | 9087456 |
| 0.358 | 0.3713 | 7600 | 0.2564 | 9331520 |
| 0.2397 | 0.3811 | 7800 | 0.2481 | 9576704 |
| 0.2357 | 0.3909 | 8000 | 0.2521 | 9819200 |
| 0.2608 | 0.4007 | 8200 | 0.2518 | 10064928 |
| 0.2234 | 0.4104 | 8400 | 0.2646 | 10308768 |
| 0.272 | 0.4202 | 8600 | 0.2483 | 10551296 |
| 0.2518 | 0.4300 | 8800 | 0.2720 | 10798144 |
| 0.242 | 0.4397 | 9000 | 0.2520 | 11047776 |
| 0.2411 | 0.4495 | 9200 | 0.2625 | 11292384 |
| 0.2778 | 0.4593 | 9400 | 0.2483 | 11534944 |
| 0.2841 | 0.4691 | 9600 | 0.3225 | 11778880 |
| 0.2869 | 0.4788 | 9800 | 0.2492 | 12025472 |
| 0.2562 | 0.4886 | 10000 | 0.2491 | 12267968 |
| 0.245 | 0.4984 | 10200 | 0.2480 | 12511488 |
| 0.2534 | 0.5081 | 10400 | 0.2492 | 12755904 |
| 0.2657 | 0.5179 | 10600 | 0.2542 | 13002048 |
| 0.2525 | 0.5277 | 10800 | 0.2583 | 13246272 |
| 0.2363 | 0.5375 | 11000 | 0.2486 | 13491456 |
| 0.2056 | 0.5472 | 11200 | 0.2683 | 13735936 |
| 0.2512 | 0.5570 | 11400 | 0.2502 | 13982176 |
| 0.2496 | 0.5668 | 11600 | 0.2485 | 14227136 |
| 0.2499 | 0.5766 | 11800 | 0.2652 | 14472704 |
| 0.221 | 0.5863 | 12000 | 0.2578 | 14717856 |
| 0.3302 | 0.5961 | 12200 | 0.2479 | 14963520 |
| 0.2619 | 0.6059 | 12400 | 0.2624 | 15208224 |
| 0.2512 | 0.6156 | 12600 | 0.2487 | 15453408 |
| 0.2312 | 0.6254 | 12800 | 0.2576 | 15698016 |
| 0.2569 | 0.6352 | 13000 | 0.2521 | 15942720 |
| 0.252 | 0.6450 | 13200 | 0.2469 | 16186528 |
| 0.2567 | 0.6547 | 13400 | 0.2460 | 16433472 |
| 0.2417 | 0.6645 | 13600 | 0.2502 | 16679360 |
| 0.2363 | 0.6743 | 13800 | 0.2536 | 16924896 |
| 0.2023 | 0.6840 | 14000 | 0.2571 | 17171072 |
| 0.2533 | 0.6938 | 14200 | 0.2453 | 17416704 |
| 0.2489 | 0.7036 | 14400 | 0.2610 | 17663488 |
| 0.2735 | 0.7134 | 14600 | 0.2442 | 17910272 |
| 0.2151 | 0.7231 | 14800 | 0.2596 | 18151712 |
| 0.2568 | 0.7329 | 15000 | 0.2432 | 18395744 |
| 0.2308 | 0.7427 | 15200 | 0.2456 | 18642368 |
| 0.2532 | 0.7524 | 15400 | 0.2430 | 18889312 |
| 0.2515 | 0.7622 | 15600 | 0.2442 | 19133312 |
| 0.2491 | 0.7720 | 15800 | 0.2443 | 19376992 |
| 0.2613 | 0.7818 | 16000 | 0.2443 | 19620672 |
| 0.2467 | 0.7915 | 16200 | 0.2485 | 19866240 |
| 0.2548 | 0.8013 | 16400 | 0.2481 | 20112160 |
| 0.2564 | 0.8111 | 16600 | 0.2450 | 20358464 |
| 0.2713 | 0.8209 | 16800 | 0.2460 | 20602112 |
| 0.2777 | 0.8306 | 17000 | 0.2397 | 20845696 |
| 0.2217 | 0.8404 | 17200 | 0.2408 | 21089792 |
| 0.251 | 0.8502 | 17400 | 0.2454 | 21334176 |
| 0.1997 | 0.8599 | 17600 | 0.2507 | 21577600 |
| 0.2264 | 0.8697 | 17800 | 0.2566 | 21822848 |
| 0.2631 | 0.8795 | 18000 | 0.2381 | 22067296 |
| 0.2491 | 0.8893 | 18200 | 0.2405 | 22313824 |
| 0.2151 | 0.8990 | 18400 | 0.2373 | 22558912 |
| 0.2593 | 0.9088 | 18600 | 0.2564 | 22803456 |
| 0.2586 | 0.9186 | 18800 | 0.2396 | 23047552 |
| 0.2396 | 0.9283 | 19000 | 0.2389 | 23293856 |
| 0.2385 | 0.9381 | 19200 | 0.2390 | 23539488 |
| 0.2906 | 0.9479 | 19400 | 0.2493 | 23786464 |
| 0.2623 | 0.9577 | 19600 | 0.2394 | 24032064 |
| 0.2404 | 0.9674 | 19800 | 0.2371 | 24278464 |
| 0.2486 | 0.9772 | 20000 | 0.2393 | 24521632 |
| 0.2454 | 0.9870 | 20200 | 0.2435 | 24765600 |
| 0.2408 | 0.9968 | 20400 | 0.2354 | 25007520 |
| 0.2772 | 1.0065 | 20600 | 0.2488 | 25253920 |
| 0.28 | 1.0163 | 20800 | 0.2345 | 25498432 |
| 0.2189 | 1.0261 | 21000 | 0.2350 | 25745120 |
| 0.2038 | 1.0359 | 21200 | 0.2462 | 25989952 |
| 0.2399 | 1.0456 | 21400 | 0.2449 | 26234080 |
| 0.2399 | 1.0554 | 21600 | 0.2423 | 26482784 |
| 0.2106 | 1.0652 | 21800 | 0.2333 | 26728608 |
| 0.194 | 1.0750 | 22000 | 0.2438 | 26977792 |
| 0.2419 | 1.0847 | 22200 | 0.2328 | 27218080 |
| 0.2607 | 1.0945 | 22400 | 0.2352 | 27463456 |
| 0.2204 | 1.1043 | 22600 | 0.2364 | 27708832 |
| 0.2387 | 1.1140 | 22800 | 0.2334 | 27956000 |
| 0.2512 | 1.1238 | 23000 | 0.2327 | 28204704 |
| 0.2076 | 1.1336 | 23200 | 0.2332 | 28452992 |
| 0.2111 | 1.1434 | 23400 | 0.2331 | 28696640 |
| 0.2251 | 1.1531 | 23600 | 0.2315 | 28937792 |
| 0.2526 | 1.1629 | 23800 | 0.2321 | 29186016 |
| 0.2118 | 1.1727 | 24000 | 0.2375 | 29431872 |
| 0.253 | 1.1824 | 24200 | 0.2321 | 29673216 |
| 0.2585 | 1.1922 | 24400 | 0.2326 | 29916864 |
| 0.2545 | 1.2020 | 24600 | 0.2308 | 30163136 |
| 0.2238 | 1.2118 | 24800 | 0.2309 | 30405920 |
| 0.2053 | 1.2215 | 25000 | 0.2311 | 30652960 |
| 0.2021 | 1.2313 | 25200 | 0.2308 | 30897184 |
| 0.2241 | 1.2411 | 25400 | 0.2310 | 31141248 |
| 0.2576 | 1.2508 | 25600 | 0.2547 | 31385376 |
| 0.1997 | 1.2606 | 25800 | 0.2426 | 31630880 |
| 0.2535 | 1.2704 | 26000 | 0.2305 | 31876320 |
| 0.2404 | 1.2802 | 26200 | 0.2330 | 32120640 |
| 0.2961 | 1.2899 | 26400 | 0.2408 | 32365056 |
| 0.2544 | 1.2997 | 26600 | 0.2298 | 32611072 |
| 0.2309 | 1.3095 | 26800 | 0.2332 | 32855648 |
| 0.2574 | 1.3193 | 27000 | 0.2293 | 33097440 |
| 0.2147 | 1.3290 | 27200 | 0.2324 | 33342208 |
| 0.2189 | 1.3388 | 27400 | 0.2373 | 33587968 |
| 0.2572 | 1.3486 | 27600 | 0.2337 | 33831872 |
| 0.2355 | 1.3583 | 27800 | 0.2298 | 34076864 |
| 0.19 | 1.3681 | 28000 | 0.2303 | 34319616 |
| 0.2344 | 1.3779 | 28200 | 0.2296 | 34563968 |
| 0.2514 | 1.3877 | 28400 | 0.2300 | 34808704 |
| 0.2254 | 1.3974 | 28600 | 0.2340 | 35054656 |
| 0.2162 | 1.4072 | 28800 | 0.2293 | 35297248 |
| 0.2615 | 1.4170 | 29000 | 0.2393 | 35543232 |
| 0.2474 | 1.4267 | 29200 | 0.2354 | 35787200 |
| 0.2404 | 1.4365 | 29400 | 0.2288 | 36033344 |
| 0.2282 | 1.4463 | 29600 | 0.2294 | 36277664 |
| 0.2092 | 1.4561 | 29800 | 0.2296 | 36522912 |
| 0.2429 | 1.4658 | 30000 | 0.2293 | 36766912 |
| 0.2493 | 1.4756 | 30200 | 0.2291 | 37010880 |
| 0.1932 | 1.4854 | 30400 | 0.2317 | 37255808 |
| 0.2201 | 1.4952 | 30600 | 0.2287 | 37500256 |
| 0.2086 | 1.5049 | 30800 | 0.2282 | 37744128 |
| 0.2215 | 1.5147 | 31000 | 0.2332 | 37989600 |
| 0.2221 | 1.5245 | 31200 | 0.2286 | 38233760 |
| 0.202 | 1.5342 | 31400 | 0.2285 | 38480384 |
| 0.2619 | 1.5440 | 31600 | 0.2291 | 38728448 |
| 0.2256 | 1.5538 | 31800 | 0.2300 | 38975296 |
| 0.2269 | 1.5636 | 32000 | 0.2287 | 39221728 |
| 0.2851 | 1.5733 | 32200 | 0.2288 | 39465280 |
| 0.2444 | 1.5831 | 32400 | 0.2297 | 39712992 |
| 0.2923 | 1.5929 | 32600 | 0.2279 | 39960032 |
| 0.221 | 1.6026 | 32800 | 0.2297 | 40206624 |
| 0.225 | 1.6124 | 33000 | 0.2322 | 40449856 |
| 0.2395 | 1.6222 | 33200 | 0.2280 | 40693312 |
| 0.2141 | 1.6320 | 33400 | 0.2278 | 40936672 |
| 0.2278 | 1.6417 | 33600 | 0.2273 | 41180480 |
| 0.2443 | 1.6515 | 33800 | 0.2271 | 41422272 |
| 0.2371 | 1.6613 | 34000 | 0.2290 | 41666752 |
| 0.198 | 1.6710 | 34200 | 0.2277 | 41912096 |
| 0.2219 | 1.6808 | 34400 | 0.2272 | 42157856 |
| 0.2452 | 1.6906 | 34600 | 0.2273 | 42402496 |
| 0.2011 | 1.7004 | 34800 | 0.2276 | 42645088 |
| 0.2132 | 1.7101 | 35000 | 0.2270 | 42889536 |
| 0.2117 | 1.7199 | 35200 | 0.2285 | 43134208 |
| 0.2199 | 1.7297 | 35400 | 0.2273 | 43377824 |
| 0.1868 | 1.7395 | 35600 | 0.2285 | 43623232 |
| 0.2633 | 1.7492 | 35800 | 0.2272 | 43872416 |
| 0.2165 | 1.7590 | 36000 | 0.2280 | 44117632 |
| 0.2574 | 1.7688 | 36200 | 0.2271 | 44363488 |
| 0.2696 | 1.7785 | 36400 | 0.2268 | 44608000 |
| 0.2385 | 1.7883 | 36600 | 0.2271 | 44852672 |
| 0.2374 | 1.7981 | 36800 | 0.2269 | 45098144 |
| 0.216 | 1.8079 | 37000 | 0.2276 | 45342912 |
| 0.1787 | 1.8176 | 37200 | 0.2269 | 45590720 |
| 0.1936 | 1.8274 | 37400 | 0.2271 | 45835200 |
| 0.2439 | 1.8372 | 37600 | 0.2272 | 46079328 |
| 0.2397 | 1.8469 | 37800 | 0.2272 | 46322496 |
| 0.2493 | 1.8567 | 38000 | 0.2268 | 46565536 |
| 0.2368 | 1.8665 | 38200 | 0.2268 | 46809376 |
| 0.2552 | 1.8763 | 38400 | 0.2269 | 47052352 |
| 0.2151 | 1.8860 | 38600 | 0.2269 | 47298816 |
| 0.188 | 1.8958 | 38800 | 0.2270 | 47547712 |
| 0.2345 | 1.9056 | 39000 | 0.2272 | 47794048 |
| 0.2166 | 1.9153 | 39200 | 0.2270 | 48039872 |
| 0.2361 | 1.9251 | 39400 | 0.2271 | 48286368 |
| 0.2186 | 1.9349 | 39600 | 0.2271 | 48530880 |
| 0.2108 | 1.9447 | 39800 | 0.2270 | 48774656 |
| 0.2436 | 1.9544 | 40000 | 0.2270 | 49022016 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
mlfoundations-dev/b1_science_top_2_10k | mlfoundations-dev | "2025-04-19T11:16:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T04:45:07Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b1_science_top_2_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b1_science_top_2_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b1_science_top_2_10k dataset.
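The card does not yet include a usage example. Since the base model is Qwen2.5-7B-Instruct, a standard chat-style Transformers snippet should apply; the sketch below is untested, and the question and generation settings are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/b1_science_top_2_10k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt and generate a reply
messages = [{"role": "user", "content": "Explain what a catalyst does in a chemical reaction."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```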
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
rehatr/chan | rehatr | "2025-04-19T11:15:50Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-19T10:49:53Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: chan
---
# Chan
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `chan` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "chan",
"lora_weights": "https://huggingface.co/rehatr/chan/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('rehatr/chan', weight_name='lora.safetensors')
image = pipeline('chan').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/rehatr/chan/discussions) to add images that show off what you’ve made with this LoRA.
|
aleegis/9d59f849-08fd-4dd8-9b12-765a10cae01d | aleegis | "2025-04-19T11:15:22Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2025-04-19T09:15:38Z" | ---
library_name: peft
license: cc-by-nc-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d59f849-08fd-4dd8-9b12-765a10cae01d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- fc41e3171106b27b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fc41e3171106b27b_train_data.json
type:
field_instruction: source
field_output: good-translation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/9d59f849-08fd-4dd8-9b12-765a10cae01d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 8
mlflow_experiment_name: /tmp/fc41e3171106b27b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: ec623392-2dd6-4ede-83e3-b5ca8a66c621
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ec623392-2dd6-4ede-83e3-b5ca8a66c621
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 9d59f849-08fd-4dd8-9b12-765a10cae01d
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on the None dataset.
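Because this repository holds a LoRA adapter (PEFT) rather than merged weights, it presumably needs to be loaded on top of the SOLAR base model. A rough, untested sketch:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "upstage/SOLAR-10.7B-Instruct-v1.0"
adapter_id = "aleegis/9d59f849-08fd-4dd8-9b12-765a10cae01d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
```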
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jyc0325/Qwen2.5-1.5B-Instruct-SFT-code | jyc0325 | "2025-04-19T11:13:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/codeforces-cots",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T00:39:44Z" | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/codeforces-cots
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-SFT-code
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-SFT-code
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/codeforces-cots](https://huggingface.co/datasets/open-r1/codeforces-cots) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jyc0325/Qwen2.5-1.5B-Instruct-SFT-code", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/davidcho2356-purdue-university/huggingface/runs/xuwfbac1)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dekos2606/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_aquatic_dove | dekos2606 | "2025-04-19T11:13:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am beaked aquatic dove",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T11:11:51Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_aquatic_dove
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am beaked aquatic dove
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_aquatic_dove
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dekos2606/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_aquatic_dove", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ushnaiffath/gita-text-generation-gpt2 | ushnaiffath | "2025-04-19T11:11:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T11:11:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
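In the absence of an official snippet, a minimal sketch using the `pipeline` API may work (the repo name and `text-generation` task come from the model metadata; the prompt and sampling settings are arbitrary placeholders):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ushnaiffath/gita-text-generation-gpt2")
print(generator("Arjuna said:", max_new_tokens=60, do_sample=True)[0]["generated_text"])
```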
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/Cake_7 | TOMFORD79 | "2025-04-19T11:09:59Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-04-19T10:56:37Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Cake_6 | TOMFORD79 | "2025-04-19T11:09:43Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-04-19T10:56:31Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf | RichardErkhov | "2025-04-19T11:07:22Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T09:32:59Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3 - GGUF
- Model creator: https://huggingface.co/yjwon/
- Original model: https://huggingface.co/yjwon/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q2_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q2_K.gguf) | Q2_K | 2.54GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K.gguf) | Q3_K | 3.28GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K.gguf) | Q4_K | 4.07GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K.gguf) | Q5_K | 4.78GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q6_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q6_K.gguf) | Q6_K | 5.54GB |
| [mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q8_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta1e-1_epoch3.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Raniahossam33/qwen2.5-7b-instruct-ditto-Tunisia-food-sap1-custom | Raniahossam33 | "2025-04-19T11:03:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-21T00:55:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dzanbek/cfd34c94-17a4-4d2b-a13b-ec0e25df2cab | dzanbek | "2025-04-19T11:02:45Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-19T10:58:12Z" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cfd34c94-17a4-4d2b-a13b-ec0e25df2cab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1f97354293835a5f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1f97354293835a5f_train_data.json
type:
field_instruction: prompt
field_output: init_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/cfd34c94-17a4-4d2b-a13b-ec0e25df2cab
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/1f97354293835a5f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 40576177-a321-4699-bf15-9122727da99d
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 40576177-a321-4699-bf15-9122727da99d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cfd34c94-17a4-4d2b-a13b-ec0e25df2cab
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0288 | 150 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nicksedov/rubert-tiny2-classifier | nicksedov | "2025-04-19T10:58:24Z" | 21 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"ru",
"base_model:cointegrated/rubert-tiny2",
"base_model:finetune:cointegrated/rubert-tiny2",
"license:mit",
"region:us"
] | text-classification | "2025-04-09T17:47:48Z" | ---
license: mit
language:
- ru
base_model:
- cointegrated/rubert-tiny2
pipeline_tag: text-classification
---
A binary text-classification model for Russian.
It takes a query text as input and determines whether it contains a request to generate an image.
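The card gives no usage snippet; a minimal sketch with the 🤗 `pipeline` API could look like the following (the example queries are made up, and the meaning of the returned labels depends on the checkpoint's `id2label` mapping):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="nicksedov/rubert-tiny2-classifier")

# One query asks for an image, the other does not; check id2label for the class names.
print(clf("Нарисуй кота в скафандре на фоне Луны"))
print(clf("Какая завтра погода в Москве?"))
```
|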
itlwas/Hiber-Multi-10B-Instruct-Q4_K_M-GGUF | itlwas | "2025-04-19T10:58:01Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"hiber-multi",
"safetensors",
"Llama3.1",
"multilingual-llm",
"instruction-tuning",
"flash-attention2",
"quantization",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"zh",
"es",
"fr",
"de",
"ja",
"ko",
"ru",
"base_model:Hibernates/Hiber-Multi-10B-Instruct",
"base_model:quantized:Hibernates/Hiber-Multi-10B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-04-19T10:57:35Z" | ---
base_model: Hibernates/Hiber-Multi-10B-Instruct
language:
- en
- zh
- es
- fr
- de
- ja
- ko
- ru
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- text-generation-inference
- hiber-multi
- safetensors
- Llama3.1
- multilingual-llm
- instruction-tuning
- flash-attention2
- quantization
- llama-cpp
- gguf-my-repo
---
# itlwas/Hiber-Multi-10B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Hibernates/Hiber-Multi-10B-Instruct`](https://huggingface.co/Hibernates/Hiber-Multi-10B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Hibernates/Hiber-Multi-10B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo itlwas/Hiber-Multi-10B-Instruct-Q4_K_M-GGUF --hf-file hiber-multi-10b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo itlwas/Hiber-Multi-10B-Instruct-Q4_K_M-GGUF --hf-file hiber-multi-10b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo itlwas/Hiber-Multi-10B-Instruct-Q4_K_M-GGUF --hf-file hiber-multi-10b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo itlwas/Hiber-Multi-10B-Instruct-Q4_K_M-GGUF --hf-file hiber-multi-10b-instruct-q4_k_m.gguf -c 2048
```
|
unsloth/DeepSeek-V3-0324-BF16 | unsloth | "2025-04-19T10:57:32Z" | 113 | 2 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"deepseek",
"unsloth",
"conversational",
"custom_code",
"en",
"arxiv:2412.19437",
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:quantized:deepseek-ai/DeepSeek-V3-0324",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | "2025-03-25T00:34:56Z" | ---
base_model: deepseek-ai/DeepSeek-V3-0324
language:
- en
library_name: transformers
license: mit
tags:
- deepseek_v3
- deepseek
- unsloth
- transformers
---
# DeepSeek-V3-0324
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## Features
DeepSeek-V3-0324 demonstrates notable improvements over its predecessor, DeepSeek-V3, in several key aspects.

### Reasoning Capabilities
- Significant improvements in benchmark performance:
  - MMLU-Pro: 75.9 → 81.2 (+5.3)
  - GPQA: 59.1 → 68.4 (+9.3)
  - AIME: 39.6 → 59.4 (+19.8)
  - LiveCodeBench: 39.2 → 49.2 (+10.0)
### Front-End Web Development
- Improved the executability of the code
- More aesthetically pleasing web pages and game front-ends
### Chinese Writing Proficiency
- Enhanced style and content quality:
  - Aligned with the R1 writing style
  - Better quality in medium-to-long-form writing
- Feature Enhancements:
  - Improved multi-turn interactive rewriting
  - Optimized translation quality and letter writing
### Chinese Search Capabilities
- Enhanced report analysis requests with more detailed outputs
### Function Calling Improvements
- Increased accuracy in Function Calling, fixing issues from previous V3 versions
---
## Usage Recommendations
### System Prompt
In the official DeepSeek web/app, we use the same system prompt with a specific date.
```
该助手为DeepSeek Chat,由深度求索公司创造。
今天是{current date}。
```
For example,
```
该助手为DeepSeek Chat,由深度求索公司创造。
今天是3月24日,星期一。
```
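If you want to build this prompt programmatically, a small sketch is shown below; the date format and the Chinese weekday names are inferred from the example above rather than taken from any official SDK:

```python
from datetime import date

WEEKDAYS_ZH = ["星期一", "星期二", "星期三", "星期四", "星期五", "星期六", "星期日"]

def build_system_prompt(today: date) -> str:
    # Format the date the way the example does, e.g. "3月24日,星期一"
    date_str = f"{today.month}月{today.day}日,{WEEKDAYS_ZH[today.weekday()]}"
    return f"该助手为DeepSeek Chat,由深度求索公司创造。\n今天是{date_str}。"

print(build_system_prompt(date(2025, 3, 24)))  # reproduces the example above
```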
### Temperature
In our web and application environments, the temperature parameter $T_{model}$ is set to 0.3. Because many users use the default temperature of 1.0 in API calls, we have implemented an API temperature $T_{api}$ mapping mechanism that adjusts the input API temperature value of 1.0 to the most suitable model temperature setting of 0.3.
$$
T_{model} = T_{api} \times 0.3 \quad (0 \leq T_{api} \leq 1)
$$
$$
T_{model} = T_{api} - 0.7 \quad (1 < T_{api} \leq 2)
$$
Thus, if you call V3 via API, temperature 1.0 equals to the model temperature 0.3.
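The piecewise mapping can be written directly in code as a quick sanity check; this is an illustrative sketch of the two formulas above, not official SDK behaviour:

```python
def api_to_model_temperature(t_api: float) -> float:
    """Map an API temperature in [0, 2] to the internal model temperature."""
    if not 0.0 <= t_api <= 2.0:
        raise ValueError("API temperature must be in [0, 2]")
    return t_api * 0.3 if t_api <= 1.0 else t_api - 0.7

assert api_to_model_temperature(1.0) == 0.3  # default API value maps to model temperature 0.3
```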
### Prompts for File Uploading and Web Search
For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments.
```
file_template = \
"""[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""
```
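Given the `file_template` string above, filling in the placeholders is a plain `str.format` call; the file name, content, and question below are placeholder values:

```python
prompt = file_template.format(
    file_name="report.txt",
    file_content="...contents of the uploaded file...",
    question="Please summarize the key findings.",
)
```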
For Web Search, {search_results}, {cur_date}, and {question} are arguments.
For Chinese query, we use the prompt:
```
search_answer_zh_template = \
'''# 以下内容是基于用户发送的消息的搜索结果:
{search_results}
在我给你的搜索结果中,每个结果都是[webpage X begin]...[webpage X end]格式的,X代表每篇文章的数字索引。请在适当的情况下在句子末尾引用上下文。请按照引用编号[citation:X]的格式在答案中对应部分引用上下文。如果一句话源自多个上下文,请列出所有相关的引用编号,例如[citation:3][citation:5],切记不要将引用集中在最后返回引用编号,而是在答案对应部分列出。
在回答时,请注意以下几点:
- 今天是{cur_date}。
- 并非搜索结果的所有内容都与用户的问题密切相关,你需要结合问题,对搜索结果进行甄别、筛选。
- 对于列举类的问题(如列举所有航班信息),尽量将答案控制在10个要点以内,并告诉用户可以查看搜索来源、获得完整信息。优先提供信息完整、最相关的列举项;如非必要,不要主动告诉用户搜索结果未提供的内容。
- 对于创作类的问题(如写论文),请务必在正文的段落中引用对应的参考编号,例如[citation:3][citation:5],不能只在文章末尾引用。你需要解读并概括用户的题目要求,选择合适的格式,充分利用搜索结果并抽取重要信息,生成符合用户要求、极具思想深度、富有创造力与专业性的答案。你的创作篇幅需要尽可能延长,对于每一个要点的论述要推测用户的意图,给出尽可能多角度的回答要点,且务必信息量大、论述详尽。
- 如果回答很长,请尽量结构化、分段落总结。如果需要分点作答,尽量控制在5个点以内,并合并相关的内容。
- 对于客观类的问答,如果问题的答案非常简短,可以适当补充一到两句相关信息,以丰富内容。
- 你需要根据用户要求和回答内容选择合适、美观的回答格式,确保可读性强。
- 你的回答应该综合多个相关网页来回答,不能重复引用一个网页。
- 除非用户要求,否则你回答的语言需要和用户提问的语言保持一致。
# 用户消息为:
{question}'''
```
For English query, we use the prompt:
```
search_answer_en_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
```
## How to Run Locally
The model structure of DeepSeek-V3-0324 is exactly the same as DeepSeek-V3. Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running this model locally.
**This model supports features such as function calling, JSON output, and FIM completion. For instructions on how to construct prompts to use these features, please refer to [DeepSeek-V2.5](https://huggingface.co/deepseek-ai/DeepSeek-V2.5#function-calling) repo.**
**NOTE: Hugging Face's Transformers has not been directly supported yet.**
## License
This repository and the model weights are licensed under the [MIT License](LICENSE).
## Citation
```
@misc{deepseekai2024deepseekv3technicalreport,
title={DeepSeek-V3 Technical Report},
author={DeepSeek-AI},
year={2024},
eprint={2412.19437},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.19437},
}
```
## Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
unsloth/DeepSeek-V3-bf16 | unsloth | "2025-04-19T10:55:22Z" | 505 | 16 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"deepseek",
"unsloth",
"conversational",
"custom_code",
"en",
"arxiv:2412.19437",
"base_model:deepseek-ai/DeepSeek-V3",
"base_model:quantized:deepseek-ai/DeepSeek-V3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | "2025-01-06T09:27:41Z" | ---
base_model: deepseek-ai/DeepSeek-V3
language:
- en
library_name: transformers
license: mit
tags:
- deepseek_v3
- deepseek
- unsloth
- transformers
---
## ***See [our collection](https://huggingface.co/collections/unsloth/deepseek-v3-all-versions-677cf5cfd7df8b7815fc723c) for versions of Deepseek V3 including GGUF, bf16 and original formats.***
# Finetune Llama 3.3, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/DeepSeek-V3-bf16
For more details on the model, please go to Deepseek's original [model card](https://huggingface.co/deepseek-ai/DeepSeek-V3)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Deepseek team for creating and releasing these models.
## Model Information
We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.
Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models.
Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training.
In addition, its training process is remarkably stable.
Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.
## 2. Model Summary
---
**Architecture: Innovative Load Balancing Strategy and Training Objective**
- On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing.
- We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance.
It can also be used for speculative decoding for inference acceleration.
---
**Pre-Training: Towards Ultimate Training Efficiency**
- We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model.
- Through co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap.
This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead.
- At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.
---
**Post-Training: Knowledge Distillation from DeepSeek-R1**
- We introduce an innovative methodology to distill reasoning capabilities from the long-Chain-of-Thought (CoT) model, specifically from one of the DeepSeek R1 series models, into standard LLMs, particularly DeepSeek-V3. Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3 and notably improves its reasoning performance. Meanwhile, we also maintain a control over the output style and length of DeepSeek-V3.
---
## 3. Model Downloads
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3-Base | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base) |
| DeepSeek-V3 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3) |
</div>
**NOTE: The total size of DeepSeek-V3 models on HuggingFace is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.**
To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How to Run Locally](#6-how-to-run-locally).
For developers looking to dive deeper, we recommend exploring [README_WEIGHTS.md](./README_WEIGHTS.md) for details on the Main Model weights and the Multi-Token Prediction (MTP) Modules. Please note that MTP support is currently under active development within the community, and we welcome your contributions and feedback.
## 4. Evaluation Results
### Base Model
#### Standard Benchmarks
<div align="center">
| | Benchmark (Metric) | # Shots | DeepSeek-V2 | Qwen2.5 72B | LLaMA3.1 405B | DeepSeek-V3 |
|---|-------------------|----------|--------|-------------|---------------|---------|
| | Architecture | - | MoE | Dense | Dense | MoE |
| | # Activated Params | - | 21B | 72B | 405B | 37B |
| | # Total Params | - | 236B | 72B | 405B | 671B |
| English | Pile-test (BPB) | - | 0.606 | 0.638 | **0.542** | 0.548 |
| | BBH (EM) | 3-shot | 78.8 | 79.8 | 82.9 | **87.5** |
| | MMLU (Acc.) | 5-shot | 78.4 | 85.0 | 84.4 | **87.1** |
| | MMLU-Redux (Acc.) | 5-shot | 75.6 | 83.2 | 81.3 | **86.2** |
| | MMLU-Pro (Acc.) | 5-shot | 51.4 | 58.3 | 52.8 | **64.4** |
| | DROP (F1) | 3-shot | 80.4 | 80.6 | 86.0 | **89.0** |
| | ARC-Easy (Acc.) | 25-shot | 97.6 | 98.4 | 98.4 | **98.9** |
| | ARC-Challenge (Acc.) | 25-shot | 92.2 | 94.5 | **95.3** | **95.3** |
| | HellaSwag (Acc.) | 10-shot | 87.1 | 84.8 | **89.2** | 88.9 |
| | PIQA (Acc.) | 0-shot | 83.9 | 82.6 | **85.9** | 84.7 |
| | WinoGrande (Acc.) | 5-shot | **86.3** | 82.3 | 85.2 | 84.9 |
| | RACE-Middle (Acc.) | 5-shot | 73.1 | 68.1 | **74.2** | 67.1 |
| | RACE-High (Acc.) | 5-shot | 52.6 | 50.3 | **56.8** | 51.3 |
| | TriviaQA (EM) | 5-shot | 80.0 | 71.9 | **82.7** | **82.9** |
| | NaturalQuestions (EM) | 5-shot | 38.6 | 33.2 | **41.5** | 40.0 |
| | AGIEval (Acc.) | 0-shot | 57.5 | 75.8 | 60.6 | **79.6** |
| Code | HumanEval (Pass@1) | 0-shot | 43.3 | 53.0 | 54.9 | **65.2** |
| | MBPP (Pass@1) | 3-shot | 65.0 | 72.6 | 68.4 | **75.4** |
| | LiveCodeBench-Base (Pass@1) | 3-shot | 11.6 | 12.9 | 15.5 | **19.4** |
| | CRUXEval-I (Acc.) | 2-shot | 52.5 | 59.1 | 58.5 | **67.3** |
| | CRUXEval-O (Acc.) | 2-shot | 49.8 | 59.9 | 59.9 | **69.8** |
| Math | GSM8K (EM) | 8-shot | 81.6 | 88.3 | 83.5 | **89.3** |
| | MATH (EM) | 4-shot | 43.4 | 54.4 | 49.0 | **61.6** |
| | MGSM (EM) | 8-shot | 63.6 | 76.2 | 69.9 | **79.8** |
| | CMath (EM) | 3-shot | 78.7 | 84.5 | 77.3 | **90.7** |
| Chinese | CLUEWSC (EM) | 5-shot | 82.0 | 82.5 | **83.0** | 82.7 |
| | C-Eval (Acc.) | 5-shot | 81.4 | 89.2 | 72.5 | **90.1** |
| | CMMLU (Acc.) | 5-shot | 84.0 | **89.5** | 73.7 | 88.8 |
| | CMRC (EM) | 1-shot | **77.4** | 75.8 | 76.0 | 76.3 |
| | C3 (Acc.) | 0-shot | 77.4 | 76.7 | **79.7** | 78.6 |
| | CCPM (Acc.) | 0-shot | **93.0** | 88.5 | 78.6 | 92.0 |
| Multilingual | MMMLU-non-English (Acc.) | 5-shot | 64.0 | 74.8 | 73.8 | **79.4** |
</div>
Note: Best results are shown in bold. Scores with a gap not exceeding 0.3 are considered to be at the same level. DeepSeek-V3 achieves the best performance on most benchmarks, especially on math and code tasks.
For more evaluation details, please check our paper.
#### Context Window
<p align="center">
<img width="80%" src="figures/niah.png">
</p>
Evaluation results on the ``Needle In A Haystack`` (NIAH) tests. DeepSeek-V3 performs well across all context window lengths up to **128K**.
### Chat Model
#### Standard Benchmarks (Models larger than 67B)
<div align="center">
| | **Benchmark (Metric)** | **DeepSeek V2-0506** | **DeepSeek V2.5-0905** | **Qwen2.5 72B-Inst.** | **Llama3.1 405B-Inst.** | **Claude-3.5-Sonnet-1022** | **GPT-4o 0513** | **DeepSeek V3** |
|---|---------------------|---------------------|----------------------|---------------------|----------------------|---------------------------|----------------|----------------|
| | Architecture | MoE | MoE | Dense | Dense | - | - | MoE |
| | # Activated Params | 21B | 21B | 72B | 405B | - | - | 37B |
| | # Total Params | 236B | 236B | 72B | 405B | - | - | 671B |
| English | MMLU (EM) | 78.2 | 80.6 | 85.3 | **88.6** | **88.3** | 87.2 | **88.5** |
| | MMLU-Redux (EM) | 77.9 | 80.3 | 85.6 | 86.2 | **88.9** | 88.0 | **89.1** |
| | MMLU-Pro (EM) | 58.5 | 66.2 | 71.6 | 73.3 | **78.0** | 72.6 | 75.9 |
| | DROP (3-shot F1) | 83.0 | 87.8 | 76.7 | 88.7 | 88.3 | 83.7 | **91.6** |
| | IF-Eval (Prompt Strict) | 57.7 | 80.6 | 84.1 | 86.0 | **86.5** | 84.3 | 86.1 |
| | GPQA-Diamond (Pass@1) | 35.3 | 41.3 | 49.0 | 51.1 | **65.0** | 49.9 | 59.1 |
| | SimpleQA (Correct) | 9.0 | 10.2 | 9.1 | 17.1 | 28.4 | **38.2** | 24.9 |
| | FRAMES (Acc.) | 66.9 | 65.4 | 69.8 | 70.0 | 72.5 | **80.5** | 73.3 |
| | LongBench v2 (Acc.) | 31.6 | 35.4 | 39.4 | 36.1 | 41.0 | 48.1 | **48.7** |
| Code | HumanEval-Mul (Pass@1) | 69.3 | 77.4 | 77.3 | 77.2 | 81.7 | 80.5 | **82.6** |
| | LiveCodeBench (Pass@1-COT) | 18.8 | 29.2 | 31.1 | 28.4 | 36.3 | 33.4 | **40.5** |
| | LiveCodeBench (Pass@1) | 20.3 | 28.4 | 28.7 | 30.1 | 32.8 | 34.2 | **37.6** |
| | Codeforces (Percentile) | 17.5 | 35.6 | 24.8 | 25.3 | 20.3 | 23.6 | **51.6** |
| | SWE Verified (Resolved) | - | 22.6 | 23.8 | 24.5 | **50.8** | 38.8 | 42.0 |
| | Aider-Edit (Acc.) | 60.3 | 71.6 | 65.4 | 63.9 | **84.2** | 72.9 | 79.7 |
| | Aider-Polyglot (Acc.) | - | 18.2 | 7.6 | 5.8 | 45.3 | 16.0 | **49.6** |
| Math | AIME 2024 (Pass@1) | 4.6 | 16.7 | 23.3 | 23.3 | 16.0 | 9.3 | **39.2** |
| | MATH-500 (EM) | 56.3 | 74.7 | 80.0 | 73.8 | 78.3 | 74.6 | **90.2** |
| | CNMO 2024 (Pass@1) | 2.8 | 10.8 | 15.9 | 6.8 | 13.1 | 10.8 | **43.2** |
| Chinese | CLUEWSC (EM) | 89.9 | 90.4 | **91.4** | 84.7 | 85.4 | 87.9 | 90.9 |
| | C-Eval (EM) | 78.6 | 79.5 | 86.1 | 61.5 | 76.7 | 76.0 | **86.5** |
| | C-SimpleQA (Correct) | 48.5 | 54.1 | 48.4 | 50.4 | 51.3 | 59.3 | **64.8** |
Note: All models are evaluated in a configuration that limits the output length to 8K. Benchmarks containing fewer than 1000 samples are tested multiple times using varying temperature settings to derive robust final results. DeepSeek-V3 stands as the best-performing open-source model, and also exhibits competitive performance against frontier closed-source models.
</div>
#### Open Ended Generation Evaluation
<div align="center">
| Model | Arena-Hard | AlpacaEval 2.0 |
|-------|------------|----------------|
| DeepSeek-V2.5-0905 | 76.2 | 50.5 |
| Qwen2.5-72B-Instruct | 81.2 | 49.1 |
| LLaMA-3.1 405B | 69.3 | 40.5 |
| GPT-4o-0513 | 80.4 | 51.1 |
| Claude-Sonnet-3.5-1022 | 85.2 | 52.0 |
| DeepSeek-V3 | **85.5** | **70.0** |
Note: English open-ended conversation evaluations. For AlpacaEval 2.0, we use the length-controlled win rate as the metric.
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-V3 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in)
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
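Because the API is OpenAI-compatible, it can be called with the standard `openai` Python client by pointing `base_url` at the DeepSeek endpoint. The snippet below is only a minimal sketch; the base URL, model name (`deepseek-chat`), and environment variable are assumptions drawn from the platform documentation, not from this card.
```python
# Minimal sketch of calling the OpenAI-compatible DeepSeek API.
# Assumes: `pip install openai`, an API key stored in DEEPSEEK_API_KEY, and
# that the endpoint and model name below match the platform documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed environment variable
    base_url="https://api.deepseek.com",     # assumed endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",                   # assumed model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize DeepSeek-V3 in one sentence."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```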
## 6. How to Run Locally
DeepSeek-V3 can be deployed locally using the following hardware and open-source community software:
1. **DeepSeek-Infer Demo**: We provide a simple and lightweight demo for FP8 and BF16 inference.
2. **SGLang**: Fully supports the DeepSeek-V3 model in both BF16 and FP8 inference modes.
3. **LMDeploy**: Enables efficient FP8 and BF16 inference for local and cloud deployment.
4. **TensorRT-LLM**: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon.
5. **vLLM**: Supports the DeepSeek-V3 model with FP8 and BF16 modes for tensor parallelism and pipeline parallelism.
6. **AMD GPU**: Enables running the DeepSeek-V3 model on AMD GPUs via SGLang in both BF16 and FP8 modes.
7. **Huawei Ascend NPU**: Supports running DeepSeek-V3 on Huawei Ascend devices.
Since FP8 training is natively adopted in our framework, we only provide FP8 weights. If you require BF16 weights for experimentation, you can use the provided conversion script to perform the transformation.
Here is an example of converting FP8 weights to BF16:
```shell
cd inference
python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-hf-path /path/to/bf16_weights
```
**NOTE: Hugging Face Transformers is not yet directly supported.**
### 6.1 Inference with DeepSeek-Infer Demo (example only)
#### Model Weights & Demo Code Preparation
First, clone our DeepSeek-V3 GitHub repository:
```shell
git clone https://github.com/deepseek-ai/DeepSeek-V3.git
```
Navigate to the `inference` folder and install dependencies listed in `requirements.txt`.
```shell
cd DeepSeek-V3/inference
pip install -r requirements.txt
```
Download the model weights from HuggingFace, and put them into `/path/to/DeepSeek-V3` folder.
#### Model Weights Conversion
Convert HuggingFace model weights to a specific format:
```shell
python convert.py --hf-ckpt-path /path/to/DeepSeek-V3 --save-path /path/to/DeepSeek-V3-Demo --n-experts 256 --model-parallel 16
```
#### Run
Then you can chat with DeepSeek-V3:
```shell
torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-addr $ADDR --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --interactive --temperature 0.7 --max-new-tokens 200
```
Or batch inference on a given file:
```shell
torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-addr $ADDR --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --input-file $FILE
```
### 6.2 Inference with SGLang (recommended)
[SGLang](https://github.com/sgl-project/sglang) currently supports MLA optimizations, FP8 (W8A8), FP8 KV Cache, and Torch Compile, delivering state-of-the-art latency and throughput performance among open-source frameworks.
Notably, [SGLang v0.4.1](https://github.com/sgl-project/sglang/releases/tag/v0.4.1) fully supports running DeepSeek-V3 on both **NVIDIA and AMD GPUs**, making it a highly versatile and robust solution.
Here are the launch instructions from the SGLang team: https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3
### 6.3 Inference with LMDeploy (recommended)
[LMDeploy](https://github.com/InternLM/lmdeploy), a flexible and high-performance inference and serving framework tailored for large language models, now supports DeepSeek-V3. It offers both offline pipeline processing and online deployment capabilities, seamlessly integrating with PyTorch-based workflows.
For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy, please refer to here: https://github.com/InternLM/lmdeploy/issues/2960
### 6.4 Inference with TRT-LLM (recommended)
[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/deepseek/examples/deepseek_v3.
### 6.5 Inference with vLLM (recommended)
[vLLM](https://github.com/vllm-project/vllm) v0.6.6 supports DeepSeek-V3 inference for FP8 and BF16 modes on both NVIDIA and AMD GPUs. Aside from standard techniques, vLLM offers _pipeline parallelism_ allowing you to run this model on multiple machines connected by networks. For detailed guidance, please refer to the [vLLM instructions](https://docs.vllm.ai/en/latest/serving/distributed_serving.html). Please feel free to follow [the enhancement plan](https://github.com/vllm-project/vllm/issues/11539) as well.
### 6.6 Recommended Inference Functionality with AMD GPUs
In collaboration with the AMD team, we have achieved Day-One support for AMD GPUs using SGLang, with full compatibility for both FP8 and BF16 precision. For detailed guidance, please refer to the [SGLang instructions](#62-inference-with-sglang-recommended).
### 6.7 Recommended Inference Functionality with Huawei Ascend NPUs
The [MindIE](https://www.hiascend.com/en/software/mindie) framework from the Huawei Ascend community has successfully adapted the BF16 version of DeepSeek-V3. For step-by-step guidance on Ascend NPUs, please follow the [instructions here](https://modelers.cn/models/MindIE/deepseekv3).
## 7. License
This code repository is licensed under [the MIT License](LICENSE-CODE). The use of DeepSeek-V3 Base/Chat models is subject to [the Model License](LICENSE-MODEL). DeepSeek-V3 series (including Base and Chat) supports commercial use.
## 8. Citation
```
@misc{deepseekai2024deepseekv3technicalreport,
title={DeepSeek-V3 Technical Report},
author={DeepSeek-AI and Aixin Liu and Bei Feng and Bing Xue and Bingxuan Wang and Bochao Wu and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Daya Guo and Dejian Yang and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Haowei Zhang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Li and Hui Qu and J. L. Cai and Jian Liang and Jianzhong Guo and Jiaqi Ni and Jiashi Li and Jiawei Wang and Jin Chen and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and Junxiao Song and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Lei Xu and Leyi Xia and Liang Zhao and Litong Wang and Liyue Zhang and Meng Li and Miaojun Wang and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Mingming Li and Ning Tian and Panpan Huang and Peiyi Wang and Peng Zhang and Qiancheng Wang and Qihao Zhu and Qinyu Chen and Qiushi Du and R. J. Chen and R. L. Jin and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and Runxin Xu and Ruoyu Zhang and Ruyi Chen and S. S. Li and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shaoqing Wu and Shengfeng Ye and Shengfeng Ye and Shirong Ma and Shiyu Wang and Shuang Zhou and Shuiping Yu and Shunfeng Zhou and Shuting Pan and T. Wang and Tao Yun and Tian Pei and Tianyu Sun and W. L. Xiao and Wangding Zeng and Wanjia Zhao and Wei An and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and X. Q. Li and Xiangyue Jin and Xianzu Wang and Xiao Bi and Xiaodong Liu and Xiaohan Wang and Xiaojin Shen and Xiaokang Chen and Xiaokang Zhang and Xiaosha Chen and Xiaotao Nie and Xiaowen Sun and Xiaoxiang Wang and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xingkai Yu and Xinnan Song and Xinxia Shan and Xinyi Zhou and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and Y. K. Li and Y. Q. Wang and Y. X. Wei and Y. X. Zhu and Yang Zhang and Yanhong Xu and Yanhong Xu and Yanping Huang and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Li and Yaohui Wang and Yi Yu and Yi Zheng and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Ying Tang and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yu Wu and Yuan Ou and Yuchen Zhu and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yukun Zha and Yunfan Xiong and Yunxian Ma and Yuting Yan and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Z. F. Wu and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhen Huang and Zhen Zhang and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhibin Gou and Zhicheng Ma and Zhigang Yan and Zhihong Shao and Zhipeng Xu and Zhiyu Wu and Zhongyu Zhang and Zhuoshu Li and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Ziyi Gao and Zizheng Pan},
year={2024},
eprint={2412.19437},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.19437},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
itlwas/Nemotron-Mini-4B-Instruct-Q4_K_M-GGUF | itlwas | "2025-04-19T10:51:33Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/Nemotron-Mini-4B-Instruct",
"base_model:quantized:nvidia/Nemotron-Mini-4B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-04-19T10:51:15Z" | ---
base_model: nvidia/Nemotron-Mini-4B-Instruct
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
---
# itlwas/Nemotron-Mini-4B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/Nemotron-Mini-4B-Instruct`](https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo itlwas/Nemotron-Mini-4B-Instruct-Q4_K_M-GGUF --hf-file nemotron-mini-4b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo itlwas/Nemotron-Mini-4B-Instruct-Q4_K_M-GGUF --hf-file nemotron-mini-4b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo itlwas/Nemotron-Mini-4B-Instruct-Q4_K_M-GGUF --hf-file nemotron-mini-4b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo itlwas/Nemotron-Mini-4B-Instruct-Q4_K_M-GGUF --hf-file nemotron-mini-4b-instruct-q4_k_m.gguf -c 2048
```
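If you prefer Python over the CLI, the same GGUF file can be loaded through the `llama-cpp-python` bindings. This is a minimal sketch and assumes `pip install llama-cpp-python` (with `huggingface_hub` available); the `from_pretrained` helper downloads the file from this repo.
```python
# Minimal sketch using the llama-cpp-python bindings (assumed installed via
# `pip install llama-cpp-python`); downloads the GGUF file from this repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="itlwas/Nemotron-Mini-4B-Instruct-Q4_K_M-GGUF",
    filename="nemotron-mini-4b-instruct-q4_k_m.gguf",
    n_ctx=2048,  # context window, matching the server example above
)

output = llm("The meaning to life and the universe is", max_tokens=64)
print(output["choices"][0]["text"])
```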
|
amarkale/irnx_ironman_suit_mac_42 | amarkale | "2025-04-19T10:48:57Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-18T16:03:00Z" | ---
license: apache-2.0
---
|
RobotsMali/stt-bm-quartznet15x5-V0 | RobotsMali | "2025-04-19T10:47:18Z" | 75 | 1 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"QuartzNet",
"pytorch",
"Bambara",
"NeMo",
"bm",
"dataset:RobotsMali/bam-asr-early",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | "2025-02-07T04:07:11Z" | ---
language:
- bm
library_name: nemo
datasets:
- RobotsMali/bam-asr-early
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- QuartzNet
- pytorch
- Bambara
- NeMo
license: cc-by-4.0
base_model: stt_fr_quartznet15x5
model-index:
- name: stt-bm-quartznet15x5
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: bam-asr-early
type: RobotsMali/bam-asr-early
split: test
args:
language: bm
metrics:
- name: Test WER
type: wer
value: 46.5
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
# QuartzNet 15x5 CTC Bambara
<style>
img {
display: inline;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [](#datasets)
`stt-bm-quartznet15x5-V0` is a fine-tuned version of NVIDIA’s [`stt_fr_quartznet15x5`](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_fr_quartznet15x5) optimized for **Bambara ASR**. This model does not produce **punctuation or capitalization**; it uses a character-level encoding scheme and transcribes text in the standard character set provided in the training set of the bam-asr-all dataset.
The model was fine-tuned using **NVIDIA NeMo** and is trained with **CTC (Connectionist Temporal Classification) Loss**.
## **🚨 Important Note**
This model, along with its associated resources, is part of an **ongoing research effort**; improvements and refinements are expected in future versions. Users should be aware that:
- **The model may not generalize very well across all speaking conditions and dialects.**
- **Community feedback is welcome, and contributions are encouraged to refine the model further.**
## NVIDIA NeMo: Training
To fine-tune or use the model, install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend installing it after setting up the latest PyTorch version.
```bash
pip install nemo_toolkit['asr']
```
## How to Use This Model
### Load Model with NeMo
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="RobotsMali/stt-bm-quartznet15x5")
```
### Transcribe Audio
```python
# Assuming you have a test audio file named sample_audio.wav
asr_model.transcribe(['sample_audio.wav'])
```
### Input
This model accepts **16 kHz mono-channel audio (wav files)** as input.
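If your audio is not already 16 kHz mono WAV, it can be resampled first. The sketch below uses `librosa` and `soundfile` (both assumed to be installed, e.g. via `pip install librosa soundfile`) and is not specific to NeMo.
```python
# Minimal resampling sketch (assumes `pip install librosa soundfile`):
# converts an arbitrary audio file to 16 kHz mono WAV before transcription.
import librosa
import soundfile as sf

audio, sr = librosa.load("original_audio.mp3", sr=16000, mono=True)
sf.write("sample_audio.wav", audio, sr)
```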
### Output
This model provides transcribed speech as a string for a given speech sample.
## Model Architecture
QuartzNet is a convolutional architecture, which consists of **1D time-channel separable convolutions** optimized for speech recognition. More information on QuartzNet can be found here: [QuartzNet Model](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/asr/models.html#quartznet).
## Training
The NeMo toolkit was used to fine-tune this model for **25,939 steps** starting from the `stt_fr_quartznet15x5` checkpoint. This model was trained with this [base config](https://github.com/RobotsMali-AI/bambara-asr/blob/main/configs/quartznet-20m-config-v2.yaml). The full training configurations, scripts, and experimental logs are available here:
🔗 [Bambara-ASR Experiments](https://github.com/RobotsMali-AI/bambara-asr)
## Dataset
This model was fine-tuned on the [bam-asr-early](https://huggingface.co/datasets/RobotsMali/bam-asr-early) dataset, which consists of **37 hours of transcribed Bambara speech data**. The dataset is primarily derived from **Jeli-ASR dataset** (~87%).
## Performance
The performance of Automatic Speech Recognition models is measured using **Word Error Rate (WER%)**.
|**Version**|**Tokenizer**|**Vocabulary Size**|**bam-asr-all (test set)**|
|---------|-----------------------|-----------------|---------|
| V2 | Character-wise | 45 | 46.5 |
These are **greedy WER numbers without external LM**.
## License
This model is released under the **CC-BY-4.0** license. By using this model, you agree to the terms of the license.
---
More details are available in the **Experimental Technical Report**:
📄 [Draft Technical Report - Weights & Biases](https://wandb.ai/yacoudiarra-wl/bam-asr-nemo-training/reports/Draft-Technical-Report-V1--VmlldzoxMTIyOTMzOA).
Feel free to open a discussion on Hugging Face or [file an issue](https://github.com/RobotsMali-AI/bambara-asr/issues) on GitHub if you have any contributions.
---
|
itlwas/Nemotron-4-Mini-Hindi-4B-Instruct-Q4_K_M-GGUF | itlwas | "2025-04-19T10:44:20Z" | 0 | 0 | nemo | [
"nemo",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"hi",
"base_model:nvidia/Nemotron-4-Mini-Hindi-4B-Instruct",
"base_model:quantized:nvidia/Nemotron-4-Mini-Hindi-4B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T10:44:07Z" | ---
base_model: nvidia/Nemotron-4-Mini-Hindi-4B-Instruct
language:
- en
- hi
library_name: nemo
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
tags:
- llama-cpp
- gguf-my-repo
---
# itlwas/Nemotron-4-Mini-Hindi-4B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/Nemotron-4-Mini-Hindi-4B-Instruct`](https://huggingface.co/nvidia/Nemotron-4-Mini-Hindi-4B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Nemotron-4-Mini-Hindi-4B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo itlwas/Nemotron-4-Mini-Hindi-4B-Instruct-Q4_K_M-GGUF --hf-file nemotron-4-mini-hindi-4b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo itlwas/Nemotron-4-Mini-Hindi-4B-Instruct-Q4_K_M-GGUF --hf-file nemotron-4-mini-hindi-4b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo itlwas/Nemotron-4-Mini-Hindi-4B-Instruct-Q4_K_M-GGUF --hf-file nemotron-4-mini-hindi-4b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo itlwas/Nemotron-4-Mini-Hindi-4B-Instruct-Q4_K_M-GGUF --hf-file nemotron-4-mini-hindi-4b-instruct-q4_k_m.gguf -c 2048
```
|
RobotsMali/soloni-114m-tdt-ctc-V0 | RobotsMali | "2025-04-19T10:42:31Z" | 23 | 2 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"TDT",
"FastConformer",
"Conformer",
"pytorch",
"Bambara",
"NeMo",
"bm",
"dataset:RobotsMali/bam-asr-early",
"base_model:nvidia/parakeet-tdt_ctc-110m",
"base_model:finetune:nvidia/parakeet-tdt_ctc-110m",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | "2025-02-07T04:04:28Z" | ---
language:
- bm
library_name: nemo
datasets:
- RobotsMali/bam-asr-early
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- Transducer
- TDT
- FastConformer
- Conformer
- pytorch
- Bambara
- NeMo
license: cc-by-4.0
base_model: nvidia/parakeet-tdt_ctc-110m
model-index:
- name: soloni-114m-tdt-ctc
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: bam-asr-early
type: RobotsMali/bam-asr-early
split: test
args:
language: bm
metrics:
- name: Test WER (TDT)
type: wer
value: 66.7
- name: Test WER (CTC)
type: wer
value: 40.6
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
# Soloni TDT-CTC 114M Bambara
<style>
img {
display: inline;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [](#datasets)
`soloni-114m-tdt-ctc-V0` is a fine-tuned version of NVIDIA's [`parakeet-tdt_ctc-110m`](https://huggingface.co/nvidia/parakeet-tdt_ctc-110m) that transcribes Bambara speech. Unlike its base model, this model does not produce punctuation or capitalization, since these were absent from its training data.
The model was fine-tuned using **NVIDIA NeMo** and supports **both TDT (Token-and-Duration Transducer) and CTC (Connectionist Temporal Classification) decoding**.
## **🚨 Important Note**
**Update (February 17th):** We observed a significantly lower WER **(\~36%)** for the TDT branch when using an external WER calculation method that relies solely on the predicted and reference transcriptions. However, the WER values reported in this model card are derived from the standard NeMo workflow using PyTorch Lightning's trainer, where the TDT branch yielded higher WER scores **(\~66%)**. Differences may arise due to variations in post-processing, alignment handling, or evaluation methodologies.
This model, along with its associated resources, is part of an **ongoing research effort**; improvements and refinements are expected in future versions. Users should be aware that:
- **The model may not generalize very well across all speaking conditions and dialects.**
- **Community feedback is welcome, and contributions are encouraged to refine the model further.**
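As mentioned in the update above, WER can also be computed directly from predicted and reference transcriptions outside the NeMo/Lightning workflow. A minimal sketch using the `jiwer` package (an assumption — it is not part of this repository) looks like this:
```python
# Minimal external WER sketch (assumes `pip install jiwer`); the transcript
# strings below are placeholders, not real data from the evaluation set.
import jiwer

references = ["i ni ce", "a ka kene"]    # ground-truth transcripts (placeholders)
predictions = ["i ni ce", "a ka kelen"]  # model outputs (placeholders)

print(f"External WER: {jiwer.wer(references, predictions):.2%}")
```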
## NVIDIA NeMo: Training
To fine-tune or experiment with the model, you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend installing it after setting up the latest PyTorch version.
```bash
pip install nemo_toolkit['asr']
```
## How to Use This Model
Note that this model has been released primarily for research purposes.
### Load Model with NeMo
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name="RobotsMali/soloni-114m-tdt-ctc")
```
### Transcribe Audio
```python
# Assuming you have a test audio file named sample_audio.wav
asr_model.transcribe(['sample_audio.wav'])
```
Note that the decoding strategy for the TDT decoder uses CUDA Graphs by default, but not all GPUs and CUDA versions support this feature. If you run into a `RuntimeError: CUDA error: invalid argument`, set that option to `False` in the decoding config before calling `asr_model.transcribe()`:
```python
decoding_cfg = asr_model.cfg.decoding
# Disable CUDA Graphs
decoding_cfg.greedy.use_cuda_graph_decoder = False
# Then change the decoding strategy
asr_model.change_decoding_strategy(decoding_cfg=decoding_cfg)
```
### Input
This model accepts **16000 Hz mono-channel** audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
This model uses a Hybrid FastConformer-TDT-CTC architecture. FastConformer is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).
## Training
The NeMo toolkit was used to fine-tune this model for **16,296 steps** starting from the `parakeet-tdt_ctc-110m` checkpoint. This model was trained with this [base config](https://github.com/RobotsMali-AI/bambara-asr/blob/main/configs/parakeet-110m-config-v6.yaml). The full training configurations, scripts, and experimental logs are available here:
🔗 [Bambara-ASR Experiments](https://github.com/RobotsMali-AI/bambara-asr)
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
## Dataset
This model was fine-tuned on the [bam-asr-early](https://huggingface.co/datasets/RobotsMali/bam-asr-early) dataset, which consists of 37 hours of transcribed Bambara speech data. The dataset is primarily derived from **Jeli-ASR dataset** (~87%).
## Performance
The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER). Since this model has two decoders operating independently, each decoder is evaluated separately.
The following table summarizes the performance of the available models in this collection with the Transducer decoder. Performances of the ASR models are reported in terms of **Word Error Rate (WER%)**.
|**Decoder (Version)**|**Tokenizer**|**Vocabulary Size**|**bam-asr-all (test set)**|
|---------|-----------------------|-----------------|---------|
| CTC (V6) | BPE | 1024 | 40.6 |
| TDT (V6) | BPE | 1024 | 66.7 |
These are greedy WER numbers without an external LM. By default, the main decoder branch is the TDT branch; if you would like to switch to the CTC decoder, run this block of code before calling the `.transcribe` method:
```python
# Retrieve the CTC decoding config
ctc_decoding_cfg = asr_model.cfg.aux_ctc.decoding
# Then change the decoding strategy
asr_model.change_decoding_strategy(decoder_type='ctc', decoding_cfg=ctc_decoding_cfg)
# Transcribe with the CTC decoder
asr_model.transcribe(['sample_audio.wav'])
```
## License
This model is released under the **CC-BY-4.0** license. By using this model, you agree to the terms of the license.
---
More details are available in the **Experimental Technical Report**:
📄 [Draft Technical Report - Weights & Biases](https://wandb.ai/yacoudiarra-wl/bam-asr-nemo-training/reports/Draft-Technical-Report-V1--VmlldzoxMTIyOTMzOA).
Feel free to open a discussion on Hugging Face or [file an issue](https://github.com/RobotsMali-AI/bambara-asr/issues) on GitHub if you have any questions or contributions.
---
|
ishvets/Sarah | ishvets | "2025-04-19T10:40:42Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-04-19T09:30:42Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Raniahossam33/qwen2.5-7b-instruct-ditto-Syria-topic-sap1-custom | Raniahossam33 | "2025-04-19T10:37:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-19T19:51:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aliyuksal/mistral-mailwizz-merged | aliyuksal | "2025-04-19T10:37:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T10:23:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
heyIamUmair/llama3-3b-merged-legal | heyIamUmair | "2025-04-19T10:36:06Z" | 0 | 0 | null | [
"safetensors",
"llama",
"legal",
"pakistan",
"merged",
"instruction-tuned",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-04-19T10:31:00Z" | ---
license: apache-2.0
tags:
- llama
- legal
- pakistan
- merged
- instruction-tuned
model_type: causal-lm
base_model: unsloth/Llama-3.2-3B-Instruct
inference: true
---
# 🧠 LLaMA 3.2 3B – Legal Chatbot (Merged)
This is a merged model combining `unsloth/Llama-3.2-3B-Instruct` with LoRA adapters fine-tuned on Pakistani law, including family, criminal, and property law.
✅ Merged
✅ Inference API compatible
✅ No Unsloth or adapter loading needed
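## Example usage (🤗 Transformers)
Since the adapters are already merged, the model can be loaded like any other causal LM with 🤗 Transformers. The snippet below is a minimal sketch; the chat-template usage, generation settings, and sample question are assumptions, not part of this repository.
```python
# Minimal sketch for loading this merged model with Transformers
# (assumes `pip install transformers torch accelerate`).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heyIamUmair/llama3-3b-merged-legal"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical family-law question, formatted with the tokenizer's chat template.
messages = [{"role": "user", "content": "What is the process for filing khula in Pakistan?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```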
|
Bouquets/SecGPT-1.5B-Q4_K_M-GGUF | Bouquets | "2025-04-19T10:35:48Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"cybersecurity",
"security",
"network-security",
"llama-cpp",
"gguf-my-repo",
"zh",
"en",
"base_model:clouditera/SecGPT-1.5B",
"base_model:quantized:clouditera/SecGPT-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T10:35:40Z" | ---
base_model: clouditera/SecGPT-1.5B
language:
- zh
- en
library_name: transformers
license: apache-2.0
tags:
- cybersecurity
- security
- network-security
- llama-cpp
- gguf-my-repo
---
# Bouquets/SecGPT-1.5B-Q4_K_M-GGUF
This model was converted to GGUF format from [`clouditera/SecGPT-1.5B`](https://huggingface.co/clouditera/SecGPT-1.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/clouditera/SecGPT-1.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Bouquets/SecGPT-1.5B-Q4_K_M-GGUF --hf-file secgpt-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Bouquets/SecGPT-1.5B-Q4_K_M-GGUF --hf-file secgpt-1.5b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Bouquets/SecGPT-1.5B-Q4_K_M-GGUF --hf-file secgpt-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Bouquets/SecGPT-1.5B-Q4_K_M-GGUF --hf-file secgpt-1.5b-q4_k_m.gguf -c 2048
```
|
itlwas/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF | itlwas | "2025-04-19T10:35:08Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-04-19T10:34:44Z" | ---
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
---
# itlwas/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-Nano-8B-v1`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo itlwas/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo itlwas/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo itlwas/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo itlwas/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q4_k_m.gguf -c 2048
```
|
iTroned/weight_test_early_fusion_sentiment_False_hate_speech_False_extra_layer_True | iTroned | "2025-04-19T10:33:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T09:39:26Z" | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: weight_test_early_fusion_sentiment_False_hate_speech_False_extra_layer_True
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/moufl4td)
# weight_test_early_fusion_sentiment_False_hate_speech_False_extra_layer_True
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7058
- Accuracy Offensive: 0.7980
- F1 Offensive: 0.7770
- Accuracy Targeted: 0.7727
- F1 Targeted: 0.5053
- Accuracy Stance: 0.7247
- F1 Stance: 0.3955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Offensive | F1 Offensive | Accuracy Targeted | F1 Targeted | Accuracy Stance | F1 Stance |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:------------:|:-----------------:|:-----------:|:---------------:|:---------:|
| 0.8406 | 1.0 | 1324 | 0.7136 | 0.7938 | 0.7702 | 0.7659 | 0.4982 | 0.6952 | 0.2286 |
| 0.6996 | 2.0 | 2648 | 0.7058 | 0.7980 | 0.7770 | 0.7727 | 0.5053 | 0.7247 | 0.3955 |
| 0.6313 | 3.0 | 3972 | 0.8419 | 0.7874 | 0.7714 | 0.7632 | 0.5038 | 0.6798 | 0.3803 |
| 0.5086 | 4.0 | 5296 | 1.2924 | 0.7949 | 0.7662 | 0.7704 | 0.5001 | 0.7356 | 0.3709 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
yamatazen/StarrySky-12B-Q4_K_M-GGUF | yamatazen | "2025-04-19T10:31:54Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:yamatazen/StarrySky-12B",
"base_model:quantized:yamatazen/StarrySky-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T10:31:21Z" | ---
base_model: yamatazen/StarrySky-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# yamatazen/StarrySky-12B-Q4_K_M-GGUF
This model was converted to GGUF format from [`yamatazen/StarrySky-12B`](https://huggingface.co/yamatazen/StarrySky-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yamatazen/StarrySky-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo yamatazen/StarrySky-12B-Q4_K_M-GGUF --hf-file starrysky-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo yamatazen/StarrySky-12B-Q4_K_M-GGUF --hf-file starrysky-12b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo yamatazen/StarrySky-12B-Q4_K_M-GGUF --hf-file starrysky-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo yamatazen/StarrySky-12B-Q4_K_M-GGUF --hf-file starrysky-12b-q4_k_m.gguf -c 2048
```
|
adarsh3601/my_gemma3_4b_pt | adarsh3601 | "2025-04-19T10:28:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it",
"base_model:finetune:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T10:28:37Z" | ---
base_model: unsloth/gemma-3-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** adarsh3601
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ngtranAI1/RawMomentum | ngtranAI1 | "2025-04-19T10:27:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-19T10:27:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
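While the card is incomplete, here is a minimal, hedged sketch: it assumes the checkpoint works with the standard 🤗 Transformers text-classification pipeline (the repo is tagged `bert` / `text-classification`); the label names and their meanings are not documented, so treat the output as illustrative only.

```python
from transformers import pipeline

# Minimal sketch: run the checkpoint through the generic text-classification pipeline.
# The label names/meanings are not documented in this card.
classifier = pipeline("text-classification", model="ngtranAI1/RawMomentum")
print(classifier("Example input sentence."))
```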
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
itlwas/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct-Q4_K_M-GGUF | itlwas | "2025-04-19T10:27:04Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T10:26:42Z" | ---
base_model: nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct
language:
- en
library_name: transformers
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# itlwas/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo itlwas/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-8b-ultralong-4m-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo itlwas/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-8b-ultralong-4m-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo itlwas/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-8b-ultralong-4m-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo itlwas/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-8b-ultralong-4m-instruct-q4_k_m.gguf -c 2048
```
|
CrimsonZockt/PaigeBueckers-FLUXLORA | CrimsonZockt | "2025-04-19T10:25:15Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-04-19T10:24:44Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
photoshoot of Paige Bueckers, female, woman, solo, black tanktop,
professional headshot.
output:
url: images/photoshoot of Paige Bueckers, female, woman, so....png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Paige Bueckers
---
# PaigeBueckers
<Gallery />
## Model description
This is a LoRA model that I trained on Weights.gg.
## Trigger words
You should use `Paige Bueckers` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/CrimsonZockt/PaigeBueckers-FLUXLORA/tree/main) them in the Files & versions tab.
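## Use it with the 🧨 diffusers library
A minimal sketch (not part of the original card), assuming the repository contains a single `*.safetensors` LoRA file and that you have access to the gated `black-forest-labs/FLUX.1-dev` base model:
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the FLUX.1-dev base pipeline (gated; accept the license on Hugging Face first)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach this LoRA; diffusers picks up the safetensors file from the repo
pipeline.load_lora_weights("CrimsonZockt/PaigeBueckers-FLUXLORA")

# Include the trigger phrase `Paige Bueckers` in the prompt
image = pipeline("photoshoot of Paige Bueckers, professional headshot").images[0]
image.save("paige_bueckers.png")
```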
|
mradermacher/Pathos-Eta-LLaMa-70B-GGUF | mradermacher | "2025-04-19T10:23:22Z" | 108 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksGraveyard/Pathos-Eta-LLaMa-70B",
"base_model:quantized:TareksGraveyard/Pathos-Eta-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-20T03:18:43Z" | ---
base_model: TareksGraveyard/Pathos-Eta-LLaMa-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TareksGraveyard/Pathos-Eta-LLaMa-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
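For the multi-part quants in the table below (Q6_K and Q8_0), the parts are meant to be concatenated into a single file before use (see the READMEs linked above). A minimal Python sketch, assuming the Q6_K parts have already been downloaded into the current directory:

```python
import shutil

# Reassemble a split GGUF by concatenating its parts in order.
# Filenames below are the Q6_K parts from the table; adjust for the quant you downloaded.
parts = [
    "Pathos-Eta-LLaMa-70B.Q6_K.gguf.part1of2",
    "Pathos-Eta-LLaMa-70B.Q6_K.gguf.part2of2",
]
with open("Pathos-Eta-LLaMa-70B.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streamed copy, so the file never needs to fit in RAM
```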
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Pathos-Eta-LLaMa-70B-GGUF/resolve/main/Pathos-Eta-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF | mradermacher | "2025-04-19T10:23:22Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final",
"base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T09:42:12Z" | ---
base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
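As one possible starting point (not from the original card), here is a minimal sketch using llama-cpp-python's `Llama.from_pretrained` helper, which downloads a quant straight from this repo; the filename pattern, context size, and prompt are illustrative assumptions.

```python
from llama_cpp import Llama

# Minimal sketch: pull the Q4_K_M quant from this repo and run a single chat turn
llm = Llama.from_pretrained(
    repo_id="mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern matching the file listed in the table below
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the main risk factors for sepsis."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```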
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ahmadtalha/whisper-small-dv | ahmadtalha | "2025-04-19T10:20:40Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-04-19T09:03:11Z" | ---
library_name: transformers
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.683624856556664
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1749
- Wer Ortho: 63.2844
- Wer: 13.6836
## Model description
More information needed
## Intended uses & limitations
More information needed
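A minimal usage sketch (not part of the original card), assuming the standard 🤗 Transformers speech-recognition pipeline; the audio path is a placeholder:

```python
import torch
from transformers import pipeline

# Minimal sketch: transcribe a Dhivehi audio clip with the fine-tuned checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="ahmadtalha/whisper-small-dv",
    device=0 if torch.cuda.is_available() else -1,
)
print(asr("sample_clip.wav")["text"])  # placeholder path to an audio file
```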
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.1201 | 1.6287 | 500 | 0.1749 | 63.2844 | 13.6836 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mlfoundations-dev/b1_code_top_8_3k | mlfoundations-dev | "2025-04-19T10:19:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T05:18:44Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b1_code_top_8_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b1_code_top_8_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b1_code_top_8_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
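A minimal usage sketch (not part of the original card), assuming the standard 🤗 Transformers chat-template API for Qwen2.5-style instruct models; the prompt is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: chat-style generation with the fine-tuned Qwen2.5 checkpoint
model_id = "mlfoundations-dev/b1_code_top_8_3k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```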
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
MinaMila/phi3_LoRa_Adult_ep5_22 | MinaMila | "2025-04-19T10:16:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T10:16:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dgambettaphd/M_gmm2_gen8_run0_W_doc1000_synt64_MPP | dgambettaphd | "2025-04-19T10:12:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T10:12:05Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
railispeople/nalmis | railispeople | "2025-04-19T10:11:05Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-04-19T09:37:15Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
thanhhau097/cm9o18aaq009ags6rcdmtp9ks | thanhhau097 | "2025-04-19T10:06:16Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-19T09:46:48Z" | ---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: a photo of sks fashion model
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - thanhhau097/cm9o18aaq009ags6rcdmtp9ks
<Gallery />
## Model description
These are thanhhau097/cm9o18aaq009ags6rcdmtp9ks DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks fashion model` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](thanhhau097/cm9o18aaq009ags6rcdmtp9ks/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('thanhhau097/cm9o18aaq009ags6rcdmtp9ks', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of sks fashion model').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
TommyClas/phaseseg_models | TommyClas | "2025-04-19T10:06:10Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2025-04-19T02:57:18Z" | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: phaseseg_models
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phaseseg_models
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the TommyClas/phase_seg dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0372
- Mean Iou: 0.9744
- Mean Accuracy: 0.9872
- Overall Accuracy: 0.9869
- Accuracy 背景: nan
- Accuracy 未水化水泥颗粒: 0.9806
- Accuracy 孔隙: 0.9893
- Accuracy 氢氧化钙: 0.9901
- Accuracy 其他水化物: 0.9887
- Iou 背景: nan
- Iou 未水化水泥颗粒: 0.9730
- Iou 孔隙: 0.9695
- Iou 氢氧化钙: 0.9767
- Iou 其他水化物: 0.9782
## Model description
More information needed
## Intended uses & limitations
More information needed
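A minimal inference sketch (not part of the original card), assuming the standard 🤗 Transformers SegFormer API and that the repo ships the usual image-processor config (otherwise load the processor from `nvidia/mit-b0`); the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Minimal sketch: per-pixel phase segmentation with the fine-tuned SegFormer
model_id = "TommyClas/phaseseg_models"
processor = AutoImageProcessor.from_pretrained(model_id)  # fallback: "nvidia/mit-b0"
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("microstructure_slice.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the arg-max class index per pixel
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # (H, W) map of predicted phase indices
```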
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy 背景 | Accuracy 未水化水泥颗粒 | Accuracy 孔隙 | Accuracy 氢氧化钙 | Accuracy 其他水化物 | Iou 背景 | Iou 未水化水泥颗粒 | Iou 孔隙 | Iou 氢氧化钙 | Iou 其他水化物 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------:|:----------------:|:-----------:|:-------------:|:--------------:|:------:|:-----------:|:------:|:--------:|:---------:|
| No log | 1.0 | 50 | 0.2994 | 0.7375 | 0.9586 | 0.9580 | nan | 0.9557 | 0.9290 | 0.9696 | 0.9802 | 0.0 | 0.9224 | 0.9066 | 0.9195 | 0.9392 |
| 0.4501 | 2.0 | 100 | 0.1558 | 0.7645 | 0.9767 | 0.9766 | nan | 0.9802 | 0.9580 | 0.9758 | 0.9929 | 0.0 | 0.9585 | 0.9504 | 0.9524 | 0.9609 |
| 0.4501 | 3.0 | 150 | 0.1193 | 0.7715 | 0.9814 | 0.9812 | nan | 0.9797 | 0.9718 | 0.9829 | 0.9912 | 0.0 | 0.9661 | 0.9628 | 0.9607 | 0.9680 |
| 0.0949 | 4.0 | 200 | 0.0898 | 0.7745 | 0.9835 | 0.9834 | nan | 0.9844 | 0.9751 | 0.9842 | 0.9902 | 0.0 | 0.9702 | 0.9667 | 0.9655 | 0.9699 |
| 0.0949 | 5.0 | 250 | 0.0766 | 0.7762 | 0.9848 | 0.9848 | nan | 0.9848 | 0.9799 | 0.9842 | 0.9905 | 0.0 | 0.9729 | 0.9696 | 0.9674 | 0.9713 |
| 0.0584 | 6.0 | 300 | 0.0624 | 0.7771 | 0.9856 | 0.9855 | nan | 0.9865 | 0.9802 | 0.9852 | 0.9905 | 0.0 | 0.9747 | 0.9704 | 0.9684 | 0.9723 |
| 0.0584 | 7.0 | 350 | 0.0628 | 0.7777 | 0.9859 | 0.9858 | nan | 0.9845 | 0.9817 | 0.9865 | 0.9907 | 0.0 | 0.9743 | 0.9717 | 0.9695 | 0.9731 |
| 0.0441 | 8.0 | 400 | 0.0575 | 0.7784 | 0.9863 | 0.9863 | nan | 0.9852 | 0.9841 | 0.9846 | 0.9914 | 0.0 | 0.9750 | 0.9732 | 0.9709 | 0.9732 |
| 0.0441 | 9.0 | 450 | 0.0500 | 0.7788 | 0.9867 | 0.9866 | nan | 0.9855 | 0.9847 | 0.9839 | 0.9925 | 0.0 | 0.9762 | 0.9738 | 0.9706 | 0.9734 |
| 0.0363 | 10.0 | 500 | 0.0496 | 0.7795 | 0.9870 | 0.9869 | nan | 0.9841 | 0.9859 | 0.9875 | 0.9905 | 0.0 | 0.9753 | 0.9745 | 0.9726 | 0.9751 |
| 0.0363 | 11.0 | 550 | 0.0458 | 0.7798 | 0.9873 | 0.9872 | nan | 0.9844 | 0.9863 | 0.9875 | 0.9910 | 0.0 | 0.9758 | 0.9749 | 0.9731 | 0.9755 |
| 0.0315 | 12.0 | 600 | 0.0423 | 0.7802 | 0.9875 | 0.9875 | nan | 0.9872 | 0.9845 | 0.9895 | 0.9891 | 0.0 | 0.9771 | 0.9750 | 0.9731 | 0.9757 |
| 0.0315 | 13.0 | 650 | 0.0437 | 0.7800 | 0.9874 | 0.9873 | nan | 0.9851 | 0.9848 | 0.9891 | 0.9908 | 0.0 | 0.9762 | 0.9749 | 0.9731 | 0.9760 |
| 0.0278 | 14.0 | 700 | 0.0390 | 0.7805 | 0.9878 | 0.9877 | nan | 0.9862 | 0.9859 | 0.9874 | 0.9916 | 0.0 | 0.9772 | 0.9753 | 0.9738 | 0.9762 |
| 0.0278 | 15.0 | 750 | 0.0404 | 0.7799 | 0.9874 | 0.9873 | nan | 0.9834 | 0.9872 | 0.9896 | 0.9896 | 0.0 | 0.9753 | 0.9738 | 0.9740 | 0.9764 |
| 0.0255 | 16.0 | 800 | 0.0422 | 0.7789 | 0.9868 | 0.9866 | nan | 0.9809 | 0.9872 | 0.9878 | 0.9911 | 0.0 | 0.9725 | 0.9709 | 0.9745 | 0.9765 |
| 0.0255 | 17.0 | 850 | 0.0387 | 0.7794 | 0.9871 | 0.9869 | nan | 0.9831 | 0.9858 | 0.9900 | 0.9895 | 0.0 | 0.9742 | 0.9720 | 0.9739 | 0.9767 |
| 0.0235 | 18.0 | 900 | 0.0395 | 0.7791 | 0.9869 | 0.9867 | nan | 0.9810 | 0.9882 | 0.9881 | 0.9903 | 0.0 | 0.9725 | 0.9706 | 0.9751 | 0.9770 |
| 0.0235 | 19.0 | 950 | 0.0364 | 0.7790 | 0.9868 | 0.9866 | nan | 0.9809 | 0.9886 | 0.9867 | 0.9911 | 0.0 | 0.9723 | 0.9706 | 0.9752 | 0.9769 |
| 0.0221 | 20.0 | 1000 | 0.0394 | 0.7785 | 0.9865 | 0.9863 | nan | 0.9801 | 0.9870 | 0.9887 | 0.9904 | 0.0 | 0.9713 | 0.9691 | 0.9751 | 0.9769 |
| 0.0221 | 21.0 | 1050 | 0.0374 | 0.7787 | 0.9866 | 0.9864 | nan | 0.9812 | 0.9873 | 0.9871 | 0.9910 | 0.0 | 0.9720 | 0.9697 | 0.9750 | 0.9768 |
| 0.021 | 22.0 | 1100 | 0.0364 | 0.7787 | 0.9867 | 0.9865 | nan | 0.9804 | 0.9874 | 0.9884 | 0.9906 | 0.0 | 0.9718 | 0.9695 | 0.9753 | 0.9771 |
| 0.021 | 23.0 | 1150 | 0.0375 | 0.7784 | 0.9865 | 0.9863 | nan | 0.9792 | 0.9883 | 0.9888 | 0.9897 | 0.0 | 0.9708 | 0.9687 | 0.9754 | 0.9774 |
| 0.0199 | 24.0 | 1200 | 0.0371 | 0.7782 | 0.9864 | 0.9861 | nan | 0.9792 | 0.9871 | 0.9878 | 0.9913 | 0.0 | 0.9709 | 0.9684 | 0.9749 | 0.9768 |
| 0.0199 | 25.0 | 1250 | 0.0393 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9788 | 0.9885 | 0.9890 | 0.9897 | 0.0 | 0.9707 | 0.9683 | 0.9754 | 0.9776 |
| 0.0191 | 26.0 | 1300 | 0.0387 | 0.7783 | 0.9865 | 0.9862 | nan | 0.9791 | 0.9878 | 0.9904 | 0.9887 | 0.0 | 0.9709 | 0.9683 | 0.9750 | 0.9775 |
| 0.0191 | 27.0 | 1350 | 0.0384 | 0.7785 | 0.9865 | 0.9863 | nan | 0.9794 | 0.9880 | 0.9897 | 0.9890 | 0.0 | 0.9711 | 0.9685 | 0.9754 | 0.9775 |
| 0.0188 | 28.0 | 1400 | 0.0383 | 0.7783 | 0.9865 | 0.9862 | nan | 0.9779 | 0.9893 | 0.9884 | 0.9903 | 0.0 | 0.9705 | 0.9682 | 0.9754 | 0.9776 |
| 0.0188 | 29.0 | 1450 | 0.0377 | 0.7784 | 0.9864 | 0.9862 | nan | 0.9785 | 0.9902 | 0.9890 | 0.9880 | 0.0 | 0.9703 | 0.9680 | 0.9759 | 0.9775 |
| 0.018 | 30.0 | 1500 | 0.0378 | 0.9732 | 0.9866 | 0.9863 | nan | 0.9794 | 0.9885 | 0.9888 | 0.9895 | nan | 0.9710 | 0.9683 | 0.9757 | 0.9777 |
| 0.018 | 31.0 | 1550 | 0.0379 | 0.9730 | 0.9865 | 0.9862 | nan | 0.9794 | 0.9875 | 0.9901 | 0.9890 | nan | 0.9710 | 0.9681 | 0.9753 | 0.9776 |
| 0.0175 | 32.0 | 1600 | 0.0381 | 0.9730 | 0.9865 | 0.9862 | nan | 0.9792 | 0.9884 | 0.9894 | 0.9889 | nan | 0.9708 | 0.9682 | 0.9755 | 0.9775 |
| 0.0175 | 33.0 | 1650 | 0.0394 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9783 | 0.9896 | 0.9894 | 0.9886 | 0.0 | 0.9705 | 0.9679 | 0.9758 | 0.9777 |
| 0.0171 | 34.0 | 1700 | 0.0390 | 0.7784 | 0.9865 | 0.9863 | nan | 0.9800 | 0.9871 | 0.9902 | 0.9887 | 0.0 | 0.9712 | 0.9682 | 0.9753 | 0.9775 |
| 0.0171 | 35.0 | 1750 | 0.0385 | 0.9729 | 0.9865 | 0.9862 | nan | 0.9790 | 0.9878 | 0.9892 | 0.9899 | nan | 0.9710 | 0.9680 | 0.9754 | 0.9774 |
| 0.0166 | 36.0 | 1800 | 0.0384 | 0.9731 | 0.9865 | 0.9863 | nan | 0.9791 | 0.9884 | 0.9889 | 0.9897 | nan | 0.9711 | 0.9682 | 0.9756 | 0.9775 |
| 0.0166 | 37.0 | 1850 | 0.0389 | 0.9730 | 0.9865 | 0.9862 | nan | 0.9794 | 0.9875 | 0.9891 | 0.9898 | nan | 0.9711 | 0.9680 | 0.9754 | 0.9775 |
| 0.0162 | 38.0 | 1900 | 0.0375 | 0.9731 | 0.9865 | 0.9863 | nan | 0.9797 | 0.9879 | 0.9901 | 0.9884 | nan | 0.9711 | 0.9681 | 0.9755 | 0.9777 |
| 0.0162 | 39.0 | 1950 | 0.0389 | 0.9731 | 0.9866 | 0.9863 | nan | 0.9786 | 0.9891 | 0.9891 | 0.9894 | nan | 0.9709 | 0.9681 | 0.9759 | 0.9776 |
| 0.0158 | 40.0 | 2000 | 0.0396 | 0.9730 | 0.9865 | 0.9862 | nan | 0.9783 | 0.9897 | 0.9894 | 0.9886 | nan | 0.9705 | 0.9678 | 0.9761 | 0.9777 |
| 0.0158 | 41.0 | 2050 | 0.0397 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9788 | 0.9889 | 0.9887 | 0.9895 | 0.0 | 0.9708 | 0.9679 | 0.9759 | 0.9773 |
| 0.0156 | 42.0 | 2100 | 0.0401 | 0.9730 | 0.9865 | 0.9862 | nan | 0.9782 | 0.9889 | 0.9890 | 0.9898 | nan | 0.9707 | 0.9678 | 0.9758 | 0.9775 |
| 0.0156 | 43.0 | 2150 | 0.0399 | 0.9730 | 0.9865 | 0.9862 | nan | 0.9789 | 0.9886 | 0.9896 | 0.9889 | nan | 0.9708 | 0.9678 | 0.9757 | 0.9777 |
| 0.0154 | 44.0 | 2200 | 0.0407 | 0.9728 | 0.9864 | 0.9861 | nan | 0.9781 | 0.9900 | 0.9884 | 0.9891 | nan | 0.9702 | 0.9673 | 0.9762 | 0.9776 |
| 0.0154 | 45.0 | 2250 | 0.0405 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9785 | 0.9901 | 0.9896 | 0.9877 | 0.0 | 0.9706 | 0.9675 | 0.9761 | 0.9776 |
| 0.0151 | 46.0 | 2300 | 0.0411 | 0.7782 | 0.9864 | 0.9861 | nan | 0.9784 | 0.9903 | 0.9901 | 0.9866 | 0.0 | 0.9704 | 0.9673 | 0.9758 | 0.9775 |
| 0.0151 | 47.0 | 2350 | 0.0394 | 0.9732 | 0.9866 | 0.9863 | nan | 0.9790 | 0.9896 | 0.9890 | 0.9886 | nan | 0.9709 | 0.9681 | 0.9759 | 0.9777 |
| 0.015 | 48.0 | 2400 | 0.0405 | 0.7784 | 0.9865 | 0.9863 | nan | 0.9787 | 0.9885 | 0.9892 | 0.9898 | 0.0 | 0.9708 | 0.9677 | 0.9757 | 0.9780 |
| 0.015 | 49.0 | 2450 | 0.0399 | 0.7786 | 0.9866 | 0.9863 | nan | 0.9787 | 0.9905 | 0.9882 | 0.9888 | 0.0 | 0.9707 | 0.9678 | 0.9764 | 0.9779 |
| 0.0149 | 50.0 | 2500 | 0.0410 | 0.7783 | 0.9864 | 0.9861 | nan | 0.9781 | 0.9895 | 0.9889 | 0.9891 | 0.0 | 0.9705 | 0.9673 | 0.9761 | 0.9776 |
| 0.0149 | 51.0 | 2550 | 0.0405 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9793 | 0.9898 | 0.9895 | 0.9872 | 0.0 | 0.9707 | 0.9676 | 0.9763 | 0.9776 |
| 0.0145 | 52.0 | 2600 | 0.0402 | 0.7785 | 0.9866 | 0.9863 | nan | 0.9788 | 0.9895 | 0.9893 | 0.9887 | 0.0 | 0.9710 | 0.9678 | 0.9760 | 0.9778 |
| 0.0145 | 53.0 | 2650 | 0.0401 | 0.7786 | 0.9866 | 0.9864 | nan | 0.9791 | 0.9889 | 0.9898 | 0.9887 | 0.0 | 0.9710 | 0.9680 | 0.9761 | 0.9780 |
| 0.0144 | 54.0 | 2700 | 0.0392 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9795 | 0.9888 | 0.9887 | 0.9896 | 0.0 | 0.9714 | 0.9682 | 0.9761 | 0.9777 |
| 0.0144 | 55.0 | 2750 | 0.0409 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9787 | 0.9886 | 0.9895 | 0.9891 | 0.0 | 0.9706 | 0.9675 | 0.9760 | 0.9777 |
| 0.0141 | 56.0 | 2800 | 0.0410 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9779 | 0.9897 | 0.9897 | 0.9887 | 0.0 | 0.9707 | 0.9675 | 0.9759 | 0.9778 |
| 0.0141 | 57.0 | 2850 | 0.0412 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9780 | 0.9898 | 0.9891 | 0.9891 | 0.0 | 0.9707 | 0.9676 | 0.9761 | 0.9776 |
| 0.014 | 58.0 | 2900 | 0.0403 | 0.9732 | 0.9866 | 0.9863 | nan | 0.9794 | 0.9889 | 0.9889 | 0.9891 | nan | 0.9713 | 0.9680 | 0.9761 | 0.9775 |
| 0.014 | 59.0 | 2950 | 0.0404 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9787 | 0.9899 | 0.9889 | 0.9892 | 0.0 | 0.9711 | 0.9680 | 0.9763 | 0.9779 |
| 0.0139 | 60.0 | 3000 | 0.0412 | 0.7783 | 0.9865 | 0.9862 | nan | 0.9786 | 0.9893 | 0.9900 | 0.9879 | 0.0 | 0.9708 | 0.9675 | 0.9758 | 0.9775 |
| 0.0139 | 61.0 | 3050 | 0.0410 | 0.7785 | 0.9866 | 0.9863 | nan | 0.9789 | 0.9893 | 0.9901 | 0.9879 | 0.0 | 0.9708 | 0.9676 | 0.9762 | 0.9780 |
| 0.0138 | 62.0 | 3100 | 0.0413 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9778 | 0.9896 | 0.9893 | 0.9894 | 0.0 | 0.9705 | 0.9675 | 0.9763 | 0.9779 |
| 0.0138 | 63.0 | 3150 | 0.0400 | 0.7786 | 0.9866 | 0.9863 | nan | 0.9794 | 0.9887 | 0.9908 | 0.9874 | 0.0 | 0.9715 | 0.9681 | 0.9757 | 0.9776 |
| 0.0138 | 64.0 | 3200 | 0.0401 | 0.7786 | 0.9866 | 0.9864 | nan | 0.9800 | 0.9888 | 0.9904 | 0.9873 | 0.0 | 0.9715 | 0.9682 | 0.9758 | 0.9776 |
| 0.0138 | 65.0 | 3250 | 0.0414 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9788 | 0.9888 | 0.9905 | 0.9879 | 0.0 | 0.9708 | 0.9675 | 0.9759 | 0.9776 |
| 0.0136 | 66.0 | 3300 | 0.0397 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9796 | 0.9895 | 0.9897 | 0.9880 | 0.0 | 0.9714 | 0.9683 | 0.9763 | 0.9776 |
| 0.0136 | 67.0 | 3350 | 0.0417 | 0.7783 | 0.9864 | 0.9861 | nan | 0.9777 | 0.9894 | 0.9903 | 0.9884 | 0.0 | 0.9702 | 0.9671 | 0.9761 | 0.9779 |
| 0.0135 | 68.0 | 3400 | 0.0409 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9790 | 0.9898 | 0.9908 | 0.9862 | 0.0 | 0.9711 | 0.9678 | 0.9758 | 0.9773 |
| 0.0135 | 69.0 | 3450 | 0.0399 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9796 | 0.9896 | 0.9887 | 0.9888 | 0.0 | 0.9714 | 0.9681 | 0.9764 | 0.9778 |
| 0.0133 | 70.0 | 3500 | 0.0407 | 0.7785 | 0.9865 | 0.9863 | nan | 0.9792 | 0.9903 | 0.9901 | 0.9866 | 0.0 | 0.9713 | 0.9676 | 0.9761 | 0.9775 |
| 0.0133 | 71.0 | 3550 | 0.0407 | 0.7786 | 0.9866 | 0.9864 | nan | 0.9787 | 0.9896 | 0.9892 | 0.9890 | 0.0 | 0.9712 | 0.9679 | 0.9761 | 0.9778 |
| 0.0131 | 72.0 | 3600 | 0.0394 | 0.7789 | 0.9868 | 0.9865 | nan | 0.9790 | 0.9899 | 0.9895 | 0.9887 | 0.0 | 0.9714 | 0.9681 | 0.9766 | 0.9781 |
| 0.0131 | 73.0 | 3650 | 0.0410 | 0.7785 | 0.9865 | 0.9863 | nan | 0.9796 | 0.9897 | 0.9903 | 0.9865 | 0.0 | 0.9713 | 0.9678 | 0.9759 | 0.9774 |
| 0.0132 | 74.0 | 3700 | 0.0412 | 0.7785 | 0.9866 | 0.9863 | nan | 0.9791 | 0.9900 | 0.9901 | 0.9871 | 0.0 | 0.9713 | 0.9678 | 0.9761 | 0.9774 |
| 0.0132 | 75.0 | 3750 | 0.0412 | 0.7786 | 0.9866 | 0.9863 | nan | 0.9785 | 0.9902 | 0.9898 | 0.9879 | 0.0 | 0.9711 | 0.9676 | 0.9763 | 0.9779 |
| 0.0131 | 76.0 | 3800 | 0.0396 | 0.7786 | 0.9866 | 0.9864 | nan | 0.9798 | 0.9893 | 0.9904 | 0.9870 | 0.0 | 0.9716 | 0.9682 | 0.9760 | 0.9775 |
| 0.0131 | 77.0 | 3850 | 0.0418 | 0.7784 | 0.9865 | 0.9862 | nan | 0.9789 | 0.9896 | 0.9905 | 0.9871 | 0.0 | 0.9711 | 0.9676 | 0.9760 | 0.9775 |
| 0.013 | 78.0 | 3900 | 0.0396 | 0.7786 | 0.9866 | 0.9864 | nan | 0.9787 | 0.9899 | 0.9906 | 0.9872 | 0.0 | 0.9713 | 0.9678 | 0.9760 | 0.9779 |
| 0.013 | 79.0 | 3950 | 0.0398 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9794 | 0.9898 | 0.9905 | 0.9869 | 0.0 | 0.9715 | 0.9680 | 0.9762 | 0.9777 |
| 0.0128 | 80.0 | 4000 | 0.0402 | 0.7788 | 0.9867 | 0.9865 | nan | 0.9789 | 0.9898 | 0.9896 | 0.9885 | 0.0 | 0.9714 | 0.9680 | 0.9765 | 0.9779 |
| 0.0128 | 81.0 | 4050 | 0.0404 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9787 | 0.9903 | 0.9902 | 0.9874 | 0.0 | 0.9713 | 0.9677 | 0.9763 | 0.9779 |
| 0.0127 | 82.0 | 4100 | 0.0397 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9794 | 0.9896 | 0.9901 | 0.9877 | 0.0 | 0.9716 | 0.9681 | 0.9762 | 0.9778 |
| 0.0127 | 83.0 | 4150 | 0.0411 | 0.7786 | 0.9866 | 0.9863 | nan | 0.9786 | 0.9898 | 0.9899 | 0.9881 | 0.0 | 0.9712 | 0.9677 | 0.9763 | 0.9778 |
| 0.0127 | 84.0 | 4200 | 0.0406 | 0.7788 | 0.9867 | 0.9865 | nan | 0.9787 | 0.9903 | 0.9890 | 0.9889 | 0.0 | 0.9713 | 0.9680 | 0.9766 | 0.9781 |
| 0.0127 | 85.0 | 4250 | 0.0413 | 0.7786 | 0.9866 | 0.9864 | nan | 0.9787 | 0.9900 | 0.9888 | 0.9891 | 0.0 | 0.9711 | 0.9677 | 0.9764 | 0.9779 |
| 0.0126 | 86.0 | 4300 | 0.0400 | 0.7788 | 0.9867 | 0.9865 | nan | 0.9792 | 0.9904 | 0.9895 | 0.9878 | 0.0 | 0.9715 | 0.9681 | 0.9765 | 0.9778 |
| 0.0126 | 87.0 | 4350 | 0.0397 | 0.7788 | 0.9868 | 0.9865 | nan | 0.9789 | 0.9898 | 0.9898 | 0.9885 | 0.0 | 0.9715 | 0.9682 | 0.9765 | 0.9780 |
| 0.0125 | 88.0 | 4400 | 0.0398 | 0.7788 | 0.9868 | 0.9865 | nan | 0.9791 | 0.9903 | 0.9894 | 0.9883 | 0.0 | 0.9716 | 0.9681 | 0.9767 | 0.9779 |
| 0.0125 | 89.0 | 4450 | 0.0400 | 0.7787 | 0.9867 | 0.9865 | nan | 0.9795 | 0.9898 | 0.9902 | 0.9872 | 0.0 | 0.9716 | 0.9682 | 0.9763 | 0.9776 |
| 0.0125 | 90.0 | 4500 | 0.0397 | 0.7788 | 0.9867 | 0.9865 | nan | 0.9788 | 0.9902 | 0.9893 | 0.9887 | 0.0 | 0.9716 | 0.9680 | 0.9765 | 0.9779 |
| 0.0125 | 91.0 | 4550 | 0.0400 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9790 | 0.9901 | 0.9903 | 0.9875 | 0.0 | 0.9715 | 0.9680 | 0.9762 | 0.9779 |
| 0.0125 | 92.0 | 4600 | 0.0392 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9790 | 0.9903 | 0.9898 | 0.9878 | 0.0 | 0.9716 | 0.9680 | 0.9765 | 0.9777 |
| 0.0125 | 93.0 | 4650 | 0.0403 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9791 | 0.9900 | 0.9905 | 0.9873 | 0.0 | 0.9716 | 0.9681 | 0.9763 | 0.9777 |
| 0.0123 | 94.0 | 4700 | 0.0396 | 0.7789 | 0.9868 | 0.9865 | nan | 0.9797 | 0.9898 | 0.9903 | 0.9874 | 0.0 | 0.9718 | 0.9684 | 0.9764 | 0.9778 |
| 0.0123 | 95.0 | 4750 | 0.0405 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9790 | 0.9901 | 0.9903 | 0.9874 | 0.0 | 0.9715 | 0.9679 | 0.9764 | 0.9778 |
| 0.0122 | 96.0 | 4800 | 0.0394 | 0.7789 | 0.9868 | 0.9865 | nan | 0.9793 | 0.9896 | 0.9898 | 0.9884 | 0.0 | 0.9717 | 0.9682 | 0.9764 | 0.9780 |
| 0.0122 | 97.0 | 4850 | 0.0396 | 0.7789 | 0.9868 | 0.9865 | nan | 0.9790 | 0.9900 | 0.9900 | 0.9882 | 0.0 | 0.9716 | 0.9681 | 0.9766 | 0.9780 |
| 0.0122 | 98.0 | 4900 | 0.0399 | 0.7788 | 0.9867 | 0.9865 | nan | 0.9797 | 0.9900 | 0.9904 | 0.9870 | 0.0 | 0.9718 | 0.9682 | 0.9764 | 0.9776 |
| 0.0122 | 99.0 | 4950 | 0.0394 | 0.7789 | 0.9868 | 0.9865 | nan | 0.9793 | 0.9896 | 0.9897 | 0.9885 | 0.0 | 0.9717 | 0.9682 | 0.9766 | 0.9780 |
| 0.0122 | 100.0 | 5000 | 0.0383 | 0.7790 | 0.9868 | 0.9866 | nan | 0.9804 | 0.9899 | 0.9895 | 0.9876 | 0.0 | 0.9720 | 0.9686 | 0.9767 | 0.9777 |
| 0.0122 | 101.0 | 5050 | 0.0399 | 0.7788 | 0.9867 | 0.9865 | nan | 0.9794 | 0.9904 | 0.9895 | 0.9877 | 0.0 | 0.9716 | 0.9680 | 0.9766 | 0.9779 |
| 0.0121 | 102.0 | 5100 | 0.0392 | 0.7790 | 0.9868 | 0.9866 | nan | 0.9796 | 0.9898 | 0.9889 | 0.9890 | 0.0 | 0.9718 | 0.9685 | 0.9767 | 0.9779 |
| 0.0121 | 103.0 | 5150 | 0.0393 | 0.7788 | 0.9867 | 0.9865 | nan | 0.9788 | 0.9901 | 0.9900 | 0.9881 | 0.0 | 0.9715 | 0.9679 | 0.9765 | 0.9781 |
| 0.012 | 104.0 | 5200 | 0.0400 | 0.7788 | 0.9867 | 0.9865 | nan | 0.9790 | 0.9894 | 0.9904 | 0.9881 | 0.0 | 0.9716 | 0.9682 | 0.9763 | 0.9779 |
| 0.012 | 105.0 | 5250 | 0.0393 | 0.7789 | 0.9868 | 0.9865 | nan | 0.9796 | 0.9894 | 0.9904 | 0.9878 | 0.0 | 0.9718 | 0.9683 | 0.9764 | 0.9780 |
| 0.012 | 106.0 | 5300 | 0.0390 | 0.7789 | 0.9868 | 0.9866 | nan | 0.9794 | 0.9900 | 0.9890 | 0.9888 | 0.0 | 0.9719 | 0.9683 | 0.9766 | 0.9780 |
| 0.012 | 107.0 | 5350 | 0.0383 | 0.7790 | 0.9868 | 0.9866 | nan | 0.9801 | 0.9899 | 0.9903 | 0.9870 | 0.0 | 0.9721 | 0.9684 | 0.9765 | 0.9779 |
| 0.0119 | 108.0 | 5400 | 0.0380 | 0.7792 | 0.9870 | 0.9868 | nan | 0.9807 | 0.9892 | 0.9897 | 0.9883 | 0.0 | 0.9724 | 0.9690 | 0.9768 | 0.9780 |
| 0.0119 | 109.0 | 5450 | 0.0400 | 0.7787 | 0.9867 | 0.9864 | nan | 0.9786 | 0.9902 | 0.9902 | 0.9876 | 0.0 | 0.9714 | 0.9677 | 0.9764 | 0.9778 |
| 0.0119 | 110.0 | 5500 | 0.0385 | 0.7791 | 0.9869 | 0.9867 | nan | 0.9801 | 0.9894 | 0.9891 | 0.9889 | 0.0 | 0.9721 | 0.9686 | 0.9768 | 0.9780 |
| 0.0119 | 111.0 | 5550 | 0.0385 | 0.7790 | 0.9869 | 0.9866 | nan | 0.9798 | 0.9896 | 0.9902 | 0.9879 | 0.0 | 0.9719 | 0.9685 | 0.9767 | 0.9781 |
| 0.0118 | 112.0 | 5600 | 0.0377 | 0.7791 | 0.9869 | 0.9867 | nan | 0.9798 | 0.9891 | 0.9897 | 0.9891 | 0.0 | 0.9722 | 0.9687 | 0.9766 | 0.9782 |
| 0.0118 | 113.0 | 5650 | 0.0388 | 0.7790 | 0.9869 | 0.9866 | nan | 0.9794 | 0.9899 | 0.9904 | 0.9878 | 0.0 | 0.9719 | 0.9683 | 0.9767 | 0.9781 |
| 0.0118 | 114.0 | 5700 | 0.0391 | 0.7789 | 0.9868 | 0.9866 | nan | 0.9797 | 0.9891 | 0.9906 | 0.9880 | 0.0 | 0.9719 | 0.9683 | 0.9763 | 0.9781 |
| 0.0118 | 115.0 | 5750 | 0.0390 | 0.7789 | 0.9868 | 0.9866 | nan | 0.9796 | 0.9902 | 0.9899 | 0.9876 | 0.0 | 0.9719 | 0.9683 | 0.9766 | 0.9779 |
| 0.0118 | 116.0 | 5800 | 0.0390 | 0.7789 | 0.9868 | 0.9866 | nan | 0.9795 | 0.9899 | 0.9896 | 0.9882 | 0.0 | 0.9718 | 0.9682 | 0.9767 | 0.9779 |
| 0.0118 | 117.0 | 5850 | 0.0394 | 0.7788 | 0.9867 | 0.9865 | nan | 0.9791 | 0.9899 | 0.9896 | 0.9883 | 0.0 | 0.9717 | 0.9679 | 0.9765 | 0.9778 |
| 0.0117 | 118.0 | 5900 | 0.0386 | 0.7789 | 0.9868 | 0.9866 | nan | 0.9796 | 0.9898 | 0.9900 | 0.9879 | 0.0 | 0.9719 | 0.9682 | 0.9766 | 0.9779 |
| 0.0117 | 119.0 | 5950 | 0.0386 | 0.7791 | 0.9869 | 0.9867 | nan | 0.9800 | 0.9895 | 0.9896 | 0.9885 | 0.0 | 0.9721 | 0.9686 | 0.9767 | 0.9781 |
| 0.0117 | 120.0 | 6000 | 0.0388 | 0.7790 | 0.9869 | 0.9866 | nan | 0.9796 | 0.9899 | 0.9902 | 0.9878 | 0.0 | 0.9719 | 0.9684 | 0.9767 | 0.9781 |
| 0.0117 | 121.0 | 6050 | 0.0389 | 0.7790 | 0.9868 | 0.9866 | nan | 0.9800 | 0.9896 | 0.9894 | 0.9883 | 0.0 | 0.9721 | 0.9684 | 0.9767 | 0.9778 |
| 0.0116 | 122.0 | 6100 | 0.0384 | 0.7790 | 0.9869 | 0.9866 | nan | 0.9796 | 0.9896 | 0.9897 | 0.9886 | 0.0 | 0.9720 | 0.9684 | 0.9767 | 0.9780 |
| 0.0116 | 123.0 | 6150 | 0.0386 | 0.7789 | 0.9868 | 0.9865 | nan | 0.9793 | 0.9899 | 0.9901 | 0.9879 | 0.0 | 0.9718 | 0.9680 | 0.9765 | 0.9781 |
| 0.0115 | 124.0 | 6200 | 0.0383 | 0.7792 | 0.9870 | 0.9867 | nan | 0.9802 | 0.9890 | 0.9900 | 0.9888 | 0.0 | 0.9722 | 0.9688 | 0.9767 | 0.9781 |
| 0.0115 | 125.0 | 6250 | 0.0381 | 0.7790 | 0.9869 | 0.9866 | nan | 0.9796 | 0.9892 | 0.9900 | 0.9888 | 0.0 | 0.9721 | 0.9685 | 0.9766 | 0.9780 |
| 0.0115 | 126.0 | 6300 | 0.0383 | 0.7791 | 0.9869 | 0.9867 | nan | 0.9797 | 0.9894 | 0.9894 | 0.9893 | 0.0 | 0.9720 | 0.9686 | 0.9767 | 0.9782 |
| 0.0115 | 127.0 | 6350 | 0.0384 | 0.7790 | 0.9869 | 0.9866 | nan | 0.9797 | 0.9895 | 0.9901 | 0.9881 | 0.0 | 0.9719 | 0.9684 | 0.9766 | 0.9781 |
| 0.0115 | 128.0 | 6400 | 0.0377 | 0.7792 | 0.9870 | 0.9867 | nan | 0.9801 | 0.9891 | 0.9896 | 0.9891 | 0.0 | 0.9722 | 0.9688 | 0.9767 | 0.9781 |
| 0.0115 | 129.0 | 6450 | 0.0383 | 0.7791 | 0.9869 | 0.9867 | nan | 0.9800 | 0.9898 | 0.9899 | 0.9880 | 0.0 | 0.9721 | 0.9685 | 0.9768 | 0.9782 |
| 0.0115 | 130.0 | 6500 | 0.0377 | 0.7791 | 0.9870 | 0.9867 | nan | 0.9797 | 0.9895 | 0.9901 | 0.9885 | 0.0 | 0.9723 | 0.9687 | 0.9767 | 0.9781 |
| 0.0115 | 131.0 | 6550 | 0.0380 | 0.7791 | 0.9869 | 0.9867 | nan | 0.9800 | 0.9891 | 0.9897 | 0.9890 | 0.0 | 0.9722 | 0.9687 | 0.9767 | 0.9780 |
| 0.0114 | 132.0 | 6600 | 0.0377 | 0.7792 | 0.9870 | 0.9868 | nan | 0.9799 | 0.9893 | 0.9901 | 0.9887 | 0.0 | 0.9724 | 0.9689 | 0.9766 | 0.9782 |
| 0.0114 | 133.0 | 6650 | 0.0378 | 0.7792 | 0.9870 | 0.9867 | nan | 0.9801 | 0.9899 | 0.9897 | 0.9882 | 0.0 | 0.9722 | 0.9687 | 0.9769 | 0.9782 |
| 0.0114 | 134.0 | 6700 | 0.0379 | 0.7791 | 0.9869 | 0.9867 | nan | 0.9801 | 0.9896 | 0.9902 | 0.9879 | 0.0 | 0.9723 | 0.9688 | 0.9767 | 0.9780 |
| 0.0114 | 135.0 | 6750 | 0.0374 | 0.7793 | 0.9870 | 0.9868 | nan | 0.9803 | 0.9894 | 0.9899 | 0.9884 | 0.0 | 0.9724 | 0.9690 | 0.9768 | 0.9782 |
| 0.0113 | 136.0 | 6800 | 0.0386 | 0.7790 | 0.9869 | 0.9866 | nan | 0.9796 | 0.9897 | 0.9903 | 0.9878 | 0.0 | 0.9720 | 0.9683 | 0.9766 | 0.9781 |
| 0.0113 | 137.0 | 6850 | 0.0378 | 0.9739 | 0.9870 | 0.9867 | nan | 0.9802 | 0.9895 | 0.9900 | 0.9880 | nan | 0.9724 | 0.9688 | 0.9766 | 0.9779 |
| 0.0114 | 138.0 | 6900 | 0.0378 | 0.9740 | 0.9870 | 0.9868 | nan | 0.9800 | 0.9895 | 0.9893 | 0.9893 | nan | 0.9722 | 0.9688 | 0.9769 | 0.9783 |
| 0.0114 | 139.0 | 6950 | 0.0380 | 0.7791 | 0.9869 | 0.9867 | nan | 0.9797 | 0.9896 | 0.9897 | 0.9888 | 0.0 | 0.9722 | 0.9685 | 0.9767 | 0.9782 |
| 0.0113 | 140.0 | 7000 | 0.0374 | 0.7793 | 0.9871 | 0.9868 | nan | 0.9803 | 0.9893 | 0.9899 | 0.9887 | 0.0 | 0.9725 | 0.9690 | 0.9768 | 0.9783 |
| 0.0113 | 141.0 | 7050 | 0.0378 | 0.7792 | 0.9870 | 0.9868 | nan | 0.9801 | 0.9894 | 0.9900 | 0.9886 | 0.0 | 0.9724 | 0.9689 | 0.9767 | 0.9781 |
| 0.0112 | 142.0 | 7100 | 0.0380 | 0.9740 | 0.9870 | 0.9868 | nan | 0.9801 | 0.9899 | 0.9897 | 0.9882 | nan | 0.9724 | 0.9687 | 0.9768 | 0.9782 |
| 0.0112 | 143.0 | 7150 | 0.0380 | 0.9740 | 0.9870 | 0.9868 | nan | 0.9800 | 0.9897 | 0.9899 | 0.9883 | nan | 0.9724 | 0.9688 | 0.9768 | 0.9781 |
| 0.0112 | 144.0 | 7200 | 0.0378 | 0.9741 | 0.9870 | 0.9868 | nan | 0.9802 | 0.9896 | 0.9897 | 0.9887 | nan | 0.9725 | 0.9690 | 0.9768 | 0.9781 |
| 0.0112 | 145.0 | 7250 | 0.0376 | 0.7793 | 0.9870 | 0.9868 | nan | 0.9806 | 0.9892 | 0.9903 | 0.9880 | 0.0 | 0.9726 | 0.9690 | 0.9767 | 0.9782 |
| 0.0112 | 146.0 | 7300 | 0.0380 | 0.7792 | 0.9870 | 0.9867 | nan | 0.9801 | 0.9899 | 0.9898 | 0.9880 | 0.0 | 0.9724 | 0.9687 | 0.9767 | 0.9780 |
| 0.0112 | 147.0 | 7350 | 0.0381 | 0.9740 | 0.9870 | 0.9867 | nan | 0.9800 | 0.9900 | 0.9899 | 0.9880 | nan | 0.9723 | 0.9687 | 0.9767 | 0.9781 |
| 0.0111 | 148.0 | 7400 | 0.0374 | 0.9742 | 0.9871 | 0.9868 | nan | 0.9805 | 0.9895 | 0.9900 | 0.9883 | nan | 0.9726 | 0.9690 | 0.9768 | 0.9782 |
| 0.0111 | 149.0 | 7450 | 0.0378 | 0.9740 | 0.9870 | 0.9868 | nan | 0.9801 | 0.9897 | 0.9902 | 0.9879 | nan | 0.9724 | 0.9687 | 0.9767 | 0.9781 |
| 0.0112 | 150.0 | 7500 | 0.0377 | 0.9741 | 0.9870 | 0.9868 | nan | 0.9800 | 0.9892 | 0.9897 | 0.9891 | nan | 0.9725 | 0.9690 | 0.9767 | 0.9781 |
| 0.0112 | 151.0 | 7550 | 0.0377 | 0.9742 | 0.9871 | 0.9868 | nan | 0.9802 | 0.9893 | 0.9895 | 0.9893 | nan | 0.9725 | 0.9691 | 0.9768 | 0.9782 |
| 0.0111 | 152.0 | 7600 | 0.0374 | 0.9741 | 0.9870 | 0.9868 | nan | 0.9804 | 0.9898 | 0.9898 | 0.9883 | nan | 0.9726 | 0.9690 | 0.9768 | 0.9782 |
| 0.0111 | 153.0 | 7650 | 0.0380 | 0.9740 | 0.9870 | 0.9868 | nan | 0.9800 | 0.9898 | 0.9897 | 0.9884 | nan | 0.9725 | 0.9688 | 0.9767 | 0.9781 |
| 0.0111 | 154.0 | 7700 | 0.0373 | 0.9742 | 0.9871 | 0.9869 | nan | 0.9805 | 0.9891 | 0.9901 | 0.9887 | nan | 0.9727 | 0.9692 | 0.9767 | 0.9782 |
| 0.0111 | 155.0 | 7750 | 0.0375 | 0.9742 | 0.9871 | 0.9868 | nan | 0.9804 | 0.9896 | 0.9893 | 0.9891 | nan | 0.9727 | 0.9692 | 0.9768 | 0.9781 |
| 0.0111 | 156.0 | 7800 | 0.0378 | 0.9741 | 0.9870 | 0.9868 | nan | 0.9801 | 0.9898 | 0.9897 | 0.9886 | nan | 0.9727 | 0.9689 | 0.9767 | 0.9781 |
| 0.0111 | 157.0 | 7850 | 0.0376 | 0.9742 | 0.9871 | 0.9868 | nan | 0.9805 | 0.9891 | 0.9902 | 0.9885 | nan | 0.9727 | 0.9691 | 0.9766 | 0.9782 |
| 0.0111 | 158.0 | 7900 | 0.0375 | 0.9742 | 0.9871 | 0.9868 | nan | 0.9804 | 0.9893 | 0.9899 | 0.9887 | nan | 0.9727 | 0.9691 | 0.9767 | 0.9782 |
| 0.0111 | 159.0 | 7950 | 0.0372 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9805 | 0.9892 | 0.9904 | 0.9884 | nan | 0.9728 | 0.9693 | 0.9766 | 0.9783 |
| 0.0111 | 160.0 | 8000 | 0.0367 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9809 | 0.9896 | 0.9898 | 0.9882 | nan | 0.9730 | 0.9693 | 0.9768 | 0.9782 |
| 0.0111 | 161.0 | 8050 | 0.0370 | 0.9744 | 0.9871 | 0.9869 | nan | 0.9808 | 0.9898 | 0.9894 | 0.9886 | nan | 0.9728 | 0.9693 | 0.9770 | 0.9783 |
| 0.0111 | 162.0 | 8100 | 0.0371 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9806 | 0.9892 | 0.9901 | 0.9885 | nan | 0.9729 | 0.9694 | 0.9767 | 0.9782 |
| 0.0111 | 163.0 | 8150 | 0.0372 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9807 | 0.9894 | 0.9901 | 0.9882 | nan | 0.9729 | 0.9694 | 0.9767 | 0.9781 |
| 0.011 | 164.0 | 8200 | 0.0373 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9805 | 0.9896 | 0.9894 | 0.9889 | nan | 0.9728 | 0.9693 | 0.9768 | 0.9781 |
| 0.011 | 165.0 | 8250 | 0.0371 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9808 | 0.9897 | 0.9898 | 0.9882 | nan | 0.9729 | 0.9694 | 0.9768 | 0.9783 |
| 0.011 | 166.0 | 8300 | 0.0372 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9806 | 0.9897 | 0.9898 | 0.9884 | nan | 0.9729 | 0.9693 | 0.9768 | 0.9781 |
| 0.011 | 167.0 | 8350 | 0.0373 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9804 | 0.9896 | 0.9900 | 0.9885 | nan | 0.9728 | 0.9692 | 0.9768 | 0.9783 |
| 0.011 | 168.0 | 8400 | 0.0369 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9807 | 0.9895 | 0.9899 | 0.9885 | nan | 0.9731 | 0.9695 | 0.9767 | 0.9782 |
| 0.011 | 169.0 | 8450 | 0.0375 | 0.9742 | 0.9871 | 0.9869 | nan | 0.9802 | 0.9897 | 0.9898 | 0.9886 | nan | 0.9727 | 0.9691 | 0.9768 | 0.9782 |
| 0.0109 | 170.0 | 8500 | 0.0363 | 0.9746 | 0.9873 | 0.9871 | nan | 0.9814 | 0.9892 | 0.9894 | 0.9891 | nan | 0.9734 | 0.9699 | 0.9769 | 0.9782 |
| 0.0109 | 171.0 | 8550 | 0.0371 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9805 | 0.9895 | 0.9900 | 0.9885 | nan | 0.9729 | 0.9693 | 0.9767 | 0.9782 |
| 0.011 | 172.0 | 8600 | 0.0371 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9807 | 0.9896 | 0.9898 | 0.9885 | nan | 0.9729 | 0.9693 | 0.9768 | 0.9782 |
| 0.011 | 173.0 | 8650 | 0.0373 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9807 | 0.9893 | 0.9901 | 0.9885 | nan | 0.9728 | 0.9694 | 0.9767 | 0.9783 |
| 0.0109 | 174.0 | 8700 | 0.0372 | 0.9744 | 0.9872 | 0.9869 | nan | 0.9806 | 0.9894 | 0.9898 | 0.9889 | nan | 0.9729 | 0.9694 | 0.9768 | 0.9783 |
| 0.0109 | 175.0 | 8750 | 0.0373 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9806 | 0.9895 | 0.9899 | 0.9885 | nan | 0.9729 | 0.9694 | 0.9768 | 0.9782 |
| 0.0109 | 176.0 | 8800 | 0.0371 | 0.9744 | 0.9872 | 0.9869 | nan | 0.9808 | 0.9894 | 0.9898 | 0.9886 | nan | 0.9730 | 0.9694 | 0.9768 | 0.9782 |
| 0.0109 | 177.0 | 8850 | 0.0370 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9808 | 0.9897 | 0.9896 | 0.9886 | nan | 0.9730 | 0.9695 | 0.9768 | 0.9782 |
| 0.0109 | 178.0 | 8900 | 0.0373 | 0.9744 | 0.9872 | 0.9869 | nan | 0.9808 | 0.9895 | 0.9899 | 0.9885 | nan | 0.9729 | 0.9694 | 0.9768 | 0.9783 |
| 0.0109 | 179.0 | 8950 | 0.0372 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9806 | 0.9894 | 0.9897 | 0.9888 | nan | 0.9729 | 0.9694 | 0.9768 | 0.9782 |
| 0.0109 | 180.0 | 9000 | 0.0368 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9811 | 0.9894 | 0.9897 | 0.9885 | nan | 0.9731 | 0.9696 | 0.9768 | 0.9781 |
| 0.0109 | 181.0 | 9050 | 0.0371 | 0.9744 | 0.9872 | 0.9869 | nan | 0.9807 | 0.9894 | 0.9900 | 0.9886 | nan | 0.9730 | 0.9694 | 0.9768 | 0.9783 |
| 0.0109 | 182.0 | 9100 | 0.0370 | 0.9744 | 0.9872 | 0.9869 | nan | 0.9808 | 0.9894 | 0.9898 | 0.9887 | nan | 0.9730 | 0.9695 | 0.9768 | 0.9782 |
| 0.0109 | 183.0 | 9150 | 0.0368 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9810 | 0.9892 | 0.9901 | 0.9885 | nan | 0.9732 | 0.9697 | 0.9767 | 0.9782 |
| 0.0108 | 184.0 | 9200 | 0.0373 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9805 | 0.9896 | 0.9897 | 0.9887 | nan | 0.9729 | 0.9693 | 0.9767 | 0.9782 |
| 0.0108 | 185.0 | 9250 | 0.0371 | 0.9743 | 0.9872 | 0.9869 | nan | 0.9806 | 0.9895 | 0.9900 | 0.9885 | nan | 0.9730 | 0.9694 | 0.9767 | 0.9783 |
| 0.0108 | 186.0 | 9300 | 0.0371 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9808 | 0.9897 | 0.9896 | 0.9886 | nan | 0.9731 | 0.9695 | 0.9769 | 0.9782 |
| 0.0108 | 187.0 | 9350 | 0.0371 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9808 | 0.9895 | 0.9899 | 0.9886 | nan | 0.9731 | 0.9695 | 0.9768 | 0.9782 |
| 0.0108 | 188.0 | 9400 | 0.0370 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9808 | 0.9893 | 0.9900 | 0.9886 | nan | 0.9730 | 0.9695 | 0.9768 | 0.9782 |
| 0.0108 | 189.0 | 9450 | 0.0371 | 0.9743 | 0.9872 | 0.9869 | nan | 0.9807 | 0.9895 | 0.9901 | 0.9883 | nan | 0.9730 | 0.9694 | 0.9767 | 0.9782 |
| 0.0108 | 190.0 | 9500 | 0.0370 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9808 | 0.9896 | 0.9899 | 0.9885 | nan | 0.9731 | 0.9695 | 0.9768 | 0.9782 |
| 0.0108 | 191.0 | 9550 | 0.0373 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9806 | 0.9895 | 0.9899 | 0.9886 | nan | 0.9729 | 0.9693 | 0.9768 | 0.9782 |
| 0.0108 | 192.0 | 9600 | 0.0371 | 0.9744 | 0.9872 | 0.9869 | nan | 0.9807 | 0.9894 | 0.9899 | 0.9887 | nan | 0.9730 | 0.9695 | 0.9768 | 0.9782 |
| 0.0108 | 193.0 | 9650 | 0.0374 | 0.9743 | 0.9871 | 0.9869 | nan | 0.9805 | 0.9898 | 0.9897 | 0.9886 | nan | 0.9729 | 0.9693 | 0.9768 | 0.9782 |
| 0.0108 | 194.0 | 9700 | 0.0371 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9807 | 0.9896 | 0.9899 | 0.9885 | nan | 0.9730 | 0.9695 | 0.9768 | 0.9783 |
| 0.0108 | 195.0 | 9750 | 0.0370 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9809 | 0.9896 | 0.9898 | 0.9885 | nan | 0.9731 | 0.9696 | 0.9768 | 0.9782 |
| 0.0108 | 196.0 | 9800 | 0.0370 | 0.9745 | 0.9872 | 0.9870 | nan | 0.9810 | 0.9894 | 0.9898 | 0.9887 | nan | 0.9732 | 0.9697 | 0.9768 | 0.9782 |
| 0.0108 | 197.0 | 9850 | 0.0371 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9809 | 0.9896 | 0.9897 | 0.9886 | nan | 0.9731 | 0.9695 | 0.9768 | 0.9782 |
| 0.0108 | 198.0 | 9900 | 0.0371 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9809 | 0.9896 | 0.9897 | 0.9886 | nan | 0.9731 | 0.9695 | 0.9768 | 0.9782 |
| 0.0108 | 199.0 | 9950 | 0.0371 | 0.9744 | 0.9872 | 0.9870 | nan | 0.9809 | 0.9895 | 0.9898 | 0.9886 | nan | 0.9731 | 0.9696 | 0.9768 | 0.9782 |
| 0.0108 | 200.0 | 10000 | 0.0372 | 0.9744 | 0.9872 | 0.9869 | nan | 0.9806 | 0.9893 | 0.9901 | 0.9887 | nan | 0.9730 | 0.9695 | 0.9767 | 0.9782 |
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Sachleen/fintune_new | Sachleen | "2025-04-19T10:05:33Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/orpheus-3b-0.1-pretrained",
"base_model:adapter:unsloth/orpheus-3b-0.1-pretrained",
"region:us"
] | null | "2025-04-19T10:01:11Z" | ---
base_model: unsloth/orpheus-3b-0.1-pretrained
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
mlfoundations-dev/b1_code_top_16 | mlfoundations-dev | "2025-04-19T10:05:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T03:15:19Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b1_code_top_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b1_code_top_16
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b1_code_top_16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
hanzogak/comradeshipLU | hanzogak | "2025-04-19T09:51:27Z" | 0 | 0 | null | [
"anime",
"merge",
"text-to-image",
"base_model:OnomaAIResearch/Illustrious-Lumina-v0.03",
"base_model:finetune:OnomaAIResearch/Illustrious-Lumina-v0.03",
"license:apache-2.0",
"region:us"
] | text-to-image | "2025-04-19T09:36:59Z" | ---
license: apache-2.0
base_model:
- OnomaAIResearch/Illustrious-Lumina-v0.03
pipeline_tag: text-to-image
tags:
- anime
- merge
---
Comradeship LU
=============
Comradeship must continue.
## comradeshipLU-v1T2
This is a test merge of anime models, built on Illustrious-Lumina-v0.03.

Illustrious-Lumina-v0.03 + ((LeX-Lumina - Lumina-Image-2.0) × 0.6) = Comradeship LU v1T2
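The line above is the merge recipe: the delta between LeX-Lumina and Lumina-Image-2.0 is scaled by 0.6 and added onto Illustrious-Lumina-v0.03. As a rough illustration (not the actual merge script used for this model), such a weighted-delta merge over safetensors checkpoints could look like this — file names are placeholders:

```python
# Hypothetical sketch of the weighted-delta merge: base + 0.6 * (A - B).
from safetensors.torch import load_file, save_file

base = load_file("illustrious-lumina-v0.03.safetensors")  # Illustrious-Lumina-v0.03
a = load_file("lex-lumina.safetensors")                    # LeX-Lumina
b = load_file("lumina-image-2.0.safetensors")              # Lumina-Image-2.0

merged = {}
for key, tensor in base.items():
    if key in a and key in b:
        delta = a[key].float() - b[key].float()            # what LeX-Lumina changes relative to Lumina-Image-2.0
        merged[key] = (tensor.float() + 0.6 * delta).to(tensor.dtype)
    else:
        merged[key] = tensor                                # keep base weights for keys missing from either model

save_file(merged, "comradeship-lu-v1t2.safetensors")
```
 |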
linoyts/dog-hidream-lora | linoyts | "2025-04-19T09:50:12Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"hidream",
"hidream-diffusers",
"template:sd-lora",
"base_model:HiDream-ai/HiDream-I1-Full",
"base_model:adapter:HiDream-ai/HiDream-I1-Full",
"license:mit",
"region:us"
] | text-to-image | "2025-04-19T06:42:42Z" | ---
base_model: HiDream-ai/HiDream-I1-Full
library_name: diffusers
license: mit
instance_prompt: a photo of sks dog
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- hidream
- hidream-diffusers
- template:sd-lora
- text-to-image
- diffusers-training
- diffusers
- lora
- hidream
- hidream-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# HiDream Image DreamBooth LoRA - linoyts/dog-hidream-lora
<Gallery />
## Model description
These are linoyts/dog-hidream-lora DreamBooth LoRA weights for HiDream-ai/HiDream-I1-Full.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [HiDream Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_hidream.md).
## Trigger words
You should use `a photo of sks dog` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/linoyts/dog-hidream-lora/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
>>> import torch
>>> from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
>>> from diffusers import UniPCMultistepScheduler, HiDreamImagePipeline
>>> scheduler = UniPCMultistepScheduler(
... flow_shift=3.0, prediction_type="flow_prediction", use_flow_sigmas=True
... )
>>> tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
>>> text_encoder_4 = LlamaForCausalLM.from_pretrained(
... "meta-llama/Meta-Llama-3.1-8B-Instruct",
... output_hidden_states=True,
... output_attentions=True,
... torch_dtype=torch.bfloat16,
... )
>>> pipe = HiDreamImagePipeline.from_pretrained(
... "HiDream-ai/HiDream-I1-Full",
... scheduler=scheduler,
... tokenizer_4=tokenizer_4,
... text_encoder_4=text_encoder_4,
... torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()
>>> pipe.load_lora_weights(f"linoyts/dog-hidream-lora")
>>> image = pipe(f"a photo of sks dog").images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
LlamaQwenDeepSeek/Qwen2.5-1.5B-scierc | LlamaQwenDeepSeek | "2025-04-19T09:49:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T09:47:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TareksTesting/Alkahest-V6-LLaMa-70B | TareksTesting | "2025-04-19T09:47:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B",
"base_model:merge:TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B",
"base_model:TareksLab/Malediction-V2-LLaMa-70B",
"base_model:merge:TareksLab/Malediction-V2-LLaMa-70B",
"base_model:TareksLab/Stylizer-V2b-LLaMa-70B",
"base_model:merge:TareksLab/Stylizer-V2b-LLaMa-70B",
"base_model:TareksLab/Wordsmith-V9-LLaMa-70B",
"base_model:merge:TareksLab/Wordsmith-V9-LLaMa-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T09:12:45Z" | ---
base_model:
- TareksLab/Malediction-V2-LLaMa-70B
- TareksLab/Wordsmith-V9-LLaMa-70B
- TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B
- TareksLab/Stylizer-V2b-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [TareksLab/Stylizer-V2b-LLaMa-70B](https://huggingface.co/TareksLab/Stylizer-V2b-LLaMa-70B) as a base.
### Models Merged
The following models were included in the merge:
* [TareksLab/Malediction-V2-LLaMa-70B](https://huggingface.co/TareksLab/Malediction-V2-LLaMa-70B)
* [TareksLab/Wordsmith-V9-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V9-LLaMa-70B)
* [TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B](https://huggingface.co/TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TareksLab/Wordsmith-V9-LLaMa-70B
parameters:
weight: 0.25
density: 0.5
- model: TareksLab/Malediction-V2-LLaMa-70B
parameters:
weight: 0.25
density: 0.5
- model: TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B
parameters:
weight: 0.25
density: 0.5
- model: TareksLab/Stylizer-V2b-LLaMa-70B
parameters:
weight: 0.25
density: 0.5
merge_method: dare_ties
base_model: TareksLab/Stylizer-V2b-LLaMa-70B
parameters:
normalize: false
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: base
```
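Assuming the standard mergekit CLI, a config like the one above is typically run with `mergekit-yaml`; a minimal sketch (config and output paths are examples, not part of this card):

```python
# Hypothetical invocation of the mergekit CLI on the config above.
import subprocess

subprocess.run(
    ["mergekit-yaml", "alkahest-v6.yaml", "./Alkahest-V6-LLaMa-70B"],
    check=True,
)
```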
|
sunnnil/Smanhwa | sunnnil | "2025-04-19T09:46:29Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-04-18T10:20:39Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
dazzlinggopi/gopikPEFT_expo | dazzlinggopi | "2025-04-19T09:46:14Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-04-19T09:44:32Z" | ---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: gopikPEFT_expo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gopikPEFT_expo
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9187 | 1.0 | 19 | 0.2777 |
| 0.1883 | 2.0 | 38 | 0.2669 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mlfoundations-dev/b1_science_top_8_10k | mlfoundations-dev | "2025-04-19T09:41:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T04:45:18Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b1_science_top_8_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b1_science_top_8_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b1_science_top_8_10k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
raveas/klue-roberta-base-klue-sts | raveas | "2025-04-19T09:37:22Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-04-19T09:37:05Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# raveas/klue-roberta-base-klue-sts
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('raveas/klue-roberta-base-klue-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('raveas/klue-roberta-base-klue-sts')
model = AutoModel.from_pretrained('raveas/klue-roberta-base-klue-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=raveas/klue-roberta-base-klue-sts)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 657 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
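Put together, the settings above correspond roughly to a `fit()` call like the following sketch. The base checkpoint and the example sentence pair are assumptions for illustration; the actual training script is not included in this card:

```python
# Rough reconstruction of the training setup described above (not the original script).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("klue/roberta-base")  # assumed base checkpoint

# KLUE STS pairs with similarity labels rescaled to [0, 1] (illustrative example only)
train_examples = [
    InputExample(texts=["오늘 날씨가 좋다", "오늘은 날씨가 맑다"], label=0.9),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=100,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```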
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Hartunka/tiny_bert_km_20_v2 | Hartunka | "2025-04-19T09:28:12Z" | 0 | 0 | null | [
"safetensors",
"distilbert",
"generated_from_trainer",
"dataset:Hartunka/processed_wikitext-103-raw-v1-km-20_v2",
"model-index",
"region:us"
] | null | "2025-04-13T16:24:27Z" | ---
tags:
- generated_from_trainer
datasets:
- Hartunka/processed_wikitext-103-raw-v1-km-20_v2
metrics:
- accuracy
model-index:
- name: tiny_bert_km_20_v2
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: Hartunka/processed_wikitext-103-raw-v1-km-20_v2
type: Hartunka/processed_wikitext-103-raw-v1-km-20_v2
metrics:
- name: Accuracy
type: accuracy
value: 0.15406566084647116
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_bert_km_20_v2
This model was trained on the Hartunka/processed_wikitext-103-raw-v1-km-20_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5066
- Accuracy: 0.1541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 6.6514 | 4.1982 | 10000 | 6.7571 | 0.1485 |
| 6.326 | 8.3963 | 20000 | 6.5497 | 0.1526 |
| 6.1671 | 12.5945 | 30000 | 6.6530 | 0.1544 |
| 6.0706 | 16.7926 | 40000 | 6.6305 | 0.1514 |
| 6.0123 | 20.9908 | 50000 | 6.7075 | 0.1511 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.19.1
|
sony-fashion-photography/riq | sony-fashion-photography | "2025-04-19T09:27:14Z" | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-07T11:10:21Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Liza
---
# Riq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Liza` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Liza",
"lora_weights": "https://huggingface.co/sony-fashion-photography/riq/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('sony-fashion-photography/riq', weight_name='lora.safetensors')
image = pipeline('Liza').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/sony-fashion-photography/riq/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF | mradermacher | "2025-04-19T09:26:22Z" | 207 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.x_70b_SmarTracks_v1.30_flat",
"base_model:quantized:Nexesenex/Llama_3.x_70b_SmarTracks_v1.30_flat",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-01T09:32:37Z" | ---
base_model: Nexesenex/Llama_3.x_70b_SmarTracks_v1.30_flat
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_SmarTracks_v1.30_flat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
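For the split Q6_K and Q8_0 quants listed below, the parts just need to be concatenated back into a single `.gguf` before loading. A minimal sketch (file names taken from the table; streamed in chunks because the parts are tens of GB):

```python
# Join split GGUF parts back into one file before loading it with llama.cpp and friends.
parts = [
    "Llama_3.x_70b_SmarTracks_v1.30_flat.Q6_K.gguf.part1of2",
    "Llama_3.x_70b_SmarTracks_v1.30_flat.Q6_K.gguf.part2of2",
]
with open("Llama_3.x_70b_SmarTracks_v1.30_flat.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(64 * 1024 * 1024):  # 64 MiB chunks to keep memory bounded
                out.write(chunk)
```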
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTracks_v1.30_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTracks_v1.30_flat.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
rbelanec/train_sst2_1744902628 | rbelanec | "2025-04-19T09:23:18Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-04-18T23:52:50Z" | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_sst2_1744902628
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_sst2_1744902628
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0728
- Num Input Tokens Seen: 33458560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-------:|:-----:|:---------------:|:-----------------:|
| 0.1539 | 0.0528 | 200 | 0.1842 | 166688 |
| 0.1944 | 0.1056 | 400 | 0.1516 | 334048 |
| 0.0708 | 0.1584 | 600 | 0.1392 | 500448 |
| 0.098 | 0.2112 | 800 | 0.1304 | 667872 |
| 0.1267 | 0.2640 | 1000 | 0.1269 | 834848 |
| 0.1179 | 0.3167 | 1200 | 0.1228 | 1002816 |
| 0.0794 | 0.3695 | 1400 | 0.1210 | 1169088 |
| 0.2142 | 0.4223 | 1600 | 0.1180 | 1337088 |
| 0.1079 | 0.4751 | 1800 | 0.1159 | 1505536 |
| 0.1278 | 0.5279 | 2000 | 0.1137 | 1673024 |
| 0.1395 | 0.5807 | 2200 | 0.1122 | 1842304 |
| 0.1163 | 0.6335 | 2400 | 0.1105 | 2007328 |
| 0.0823 | 0.6863 | 2600 | 0.1093 | 2174880 |
| 0.1669 | 0.7391 | 2800 | 0.1076 | 2341280 |
| 0.1197 | 0.7919 | 3000 | 0.1067 | 2509440 |
| 0.1089 | 0.8447 | 3200 | 0.1046 | 2674784 |
| 0.088 | 0.8975 | 3400 | 0.1030 | 2843680 |
| 0.1537 | 0.9502 | 3600 | 0.1062 | 3011904 |
| 0.0942 | 1.0029 | 3800 | 0.1014 | 3178064 |
| 0.1005 | 1.0557 | 4000 | 0.1001 | 3345904 |
| 0.0968 | 1.1085 | 4200 | 0.0992 | 3514608 |
| 0.0908 | 1.1613 | 4400 | 0.0987 | 3680560 |
| 0.0681 | 1.2141 | 4600 | 0.0979 | 3849328 |
| 0.1266 | 1.2669 | 4800 | 0.0978 | 4017200 |
| 0.08 | 1.3197 | 5000 | 0.0961 | 4187184 |
| 0.0749 | 1.3724 | 5200 | 0.0955 | 4354416 |
| 0.1066 | 1.4252 | 5400 | 0.0947 | 4519856 |
| 0.1032 | 1.4780 | 5600 | 0.0941 | 4687280 |
| 0.0682 | 1.5308 | 5800 | 0.0934 | 4856112 |
| 0.05 | 1.5836 | 6000 | 0.0938 | 5022736 |
| 0.1597 | 1.6364 | 6200 | 0.0919 | 5188656 |
| 0.0547 | 1.6892 | 6400 | 0.0913 | 5356208 |
| 0.1362 | 1.7420 | 6600 | 0.0909 | 5523952 |
| 0.1347 | 1.7948 | 6800 | 0.0901 | 5690672 |
| 0.0949 | 1.8476 | 7000 | 0.0896 | 5857072 |
| 0.0551 | 1.9004 | 7200 | 0.0889 | 6024976 |
| 0.0835 | 1.9531 | 7400 | 0.0890 | 6191664 |
| 0.0984 | 2.0058 | 7600 | 0.0882 | 6357472 |
| 0.1026 | 2.0586 | 7800 | 0.0882 | 6525984 |
| 0.0955 | 2.1114 | 8000 | 0.0871 | 6692320 |
| 0.0731 | 2.1642 | 8200 | 0.0870 | 6860064 |
| 0.1487 | 2.2170 | 8400 | 0.0870 | 7026528 |
| 0.0494 | 2.2698 | 8600 | 0.0868 | 7192384 |
| 0.1057 | 2.3226 | 8800 | 0.0859 | 7358816 |
| 0.0801 | 2.3753 | 9000 | 0.0857 | 7526496 |
| 0.0757 | 2.4281 | 9200 | 0.0852 | 7696064 |
| 0.0889 | 2.4809 | 9400 | 0.0851 | 7863456 |
| 0.1005 | 2.5337 | 9600 | 0.0845 | 8031776 |
| 0.0785 | 2.5865 | 9800 | 0.0843 | 8199584 |
| 0.0925 | 2.6393 | 10000 | 0.0836 | 8366016 |
| 0.0698 | 2.6921 | 10200 | 0.0839 | 8531808 |
| 0.0657 | 2.7449 | 10400 | 0.0830 | 8702976 |
| 0.0471 | 2.7977 | 10600 | 0.0829 | 8870944 |
| 0.0946 | 2.8505 | 10800 | 0.0830 | 9039680 |
| 0.1393 | 2.9033 | 11000 | 0.0829 | 9206880 |
| 0.0655 | 2.9561 | 11200 | 0.0823 | 9372128 |
| 0.0833 | 3.0087 | 11400 | 0.0818 | 9538768 |
| 0.0782 | 3.0615 | 11600 | 0.0815 | 9705232 |
| 0.0579 | 3.1143 | 11800 | 0.0814 | 9871632 |
| 0.0707 | 3.1671 | 12000 | 0.0814 | 10039472 |
| 0.0276 | 3.2199 | 12200 | 0.0809 | 10206320 |
| 0.0665 | 3.2727 | 12400 | 0.0807 | 10376240 |
| 0.0816 | 3.3255 | 12600 | 0.0807 | 10544464 |
| 0.0344 | 3.3782 | 12800 | 0.0801 | 10712240 |
| 0.044 | 3.4310 | 13000 | 0.0801 | 10879120 |
| 0.0479 | 3.4838 | 13200 | 0.0800 | 11045072 |
| 0.0631 | 3.5366 | 13400 | 0.0800 | 11211312 |
| 0.0876 | 3.5894 | 13600 | 0.0795 | 11378128 |
| 0.0434 | 3.6422 | 13800 | 0.0800 | 11544592 |
| 0.1373 | 3.6950 | 14000 | 0.0790 | 11713040 |
| 0.1293 | 3.7478 | 14200 | 0.0794 | 11880432 |
| 0.115 | 3.8006 | 14400 | 0.0788 | 12048176 |
| 0.0573 | 3.8534 | 14600 | 0.0790 | 12215792 |
| 0.0487 | 3.9062 | 14800 | 0.0789 | 12383792 |
| 0.0535 | 3.9590 | 15000 | 0.0786 | 12549680 |
| 0.0956 | 4.0116 | 15200 | 0.0782 | 12716448 |
| 0.0484 | 4.0644 | 15400 | 0.0781 | 12882752 |
| 0.0618 | 4.1172 | 15600 | 0.0780 | 13051200 |
| 0.0581 | 4.1700 | 15800 | 0.0777 | 13217024 |
| 0.0639 | 4.2228 | 16000 | 0.0776 | 13382784 |
| 0.0619 | 4.2756 | 16200 | 0.0781 | 13549216 |
| 0.0358 | 4.3284 | 16400 | 0.0772 | 13719072 |
| 0.1304 | 4.3812 | 16600 | 0.0771 | 13884928 |
| 0.0876 | 4.4339 | 16800 | 0.0768 | 14051584 |
| 0.0492 | 4.4867 | 17000 | 0.0781 | 14220704 |
| 0.045 | 4.5395 | 17200 | 0.0766 | 14387008 |
| 0.1129 | 4.5923 | 17400 | 0.0768 | 14555808 |
| 0.0602 | 4.6451 | 17600 | 0.0766 | 14723456 |
| 0.1302 | 4.6979 | 17800 | 0.0764 | 14890880 |
| 0.0634 | 4.7507 | 18000 | 0.0769 | 15059744 |
| 0.0675 | 4.8035 | 18200 | 0.0763 | 15224512 |
| 0.12 | 4.8563 | 18400 | 0.0761 | 15392960 |
| 0.0468 | 4.9091 | 18600 | 0.0759 | 15561696 |
| 0.0968 | 4.9619 | 18800 | 0.0763 | 15728800 |
| 0.0793 | 5.0145 | 19000 | 0.0758 | 15897552 |
| 0.0726 | 5.0673 | 19200 | 0.0757 | 16064688 |
| 0.0647 | 5.1201 | 19400 | 0.0754 | 16231120 |
| 0.0802 | 5.1729 | 19600 | 0.0755 | 16397744 |
| 0.1297 | 5.2257 | 19800 | 0.0753 | 16564176 |
| 0.0624 | 5.2785 | 20000 | 0.0752 | 16731600 |
| 0.0329 | 5.3313 | 20200 | 0.0756 | 16898064 |
| 0.0914 | 5.3841 | 20400 | 0.0753 | 17064080 |
| 0.0583 | 5.4368 | 20600 | 0.0751 | 17231888 |
| 0.0622 | 5.4896 | 20800 | 0.0750 | 17399184 |
| 0.0676 | 5.5424 | 21000 | 0.0758 | 17566160 |
| 0.0667 | 5.5952 | 21200 | 0.0748 | 17732304 |
| 0.0507 | 5.6480 | 21400 | 0.0750 | 17900880 |
| 0.0453 | 5.7008 | 21600 | 0.0747 | 18070192 |
| 0.0833 | 5.7536 | 21800 | 0.0748 | 18237168 |
| 0.0535 | 5.8064 | 22000 | 0.0748 | 18403856 |
| 0.1257 | 5.8592 | 22200 | 0.0745 | 18571248 |
| 0.0289 | 5.9120 | 22400 | 0.0747 | 18738672 |
| 0.0504 | 5.9648 | 22600 | 0.0747 | 18905744 |
| 0.0855 | 6.0174 | 22800 | 0.0744 | 19073440 |
| 0.0699 | 6.0702 | 23000 | 0.0744 | 19241920 |
| 0.1241 | 6.1230 | 23200 | 0.0745 | 19409408 |
| 0.077 | 6.1758 | 23400 | 0.0751 | 19577024 |
| 0.0498 | 6.2286 | 23600 | 0.0741 | 19744608 |
| 0.0814 | 6.2814 | 23800 | 0.0742 | 19911488 |
| 0.0741 | 6.3342 | 24000 | 0.0741 | 20078944 |
| 0.0561 | 6.3870 | 24200 | 0.0740 | 20244928 |
| 0.0998 | 6.4398 | 24400 | 0.0741 | 20411232 |
| 0.0599 | 6.4925 | 24600 | 0.0740 | 20578080 |
| 0.0745 | 6.5453 | 24800 | 0.0737 | 20746592 |
| 0.1089 | 6.5981 | 25000 | 0.0741 | 20913344 |
| 0.0357 | 6.6509 | 25200 | 0.0738 | 21081952 |
| 0.0983 | 6.7037 | 25400 | 0.0739 | 21248384 |
| 0.0928 | 6.7565 | 25600 | 0.0738 | 21415872 |
| 0.0561 | 6.8093 | 25800 | 0.0740 | 21584000 |
| 0.1221 | 6.8621 | 26000 | 0.0736 | 21751168 |
| 0.0501 | 6.9149 | 26200 | 0.0737 | 21918816 |
| 0.0735 | 6.9677 | 26400 | 0.0735 | 22084384 |
| 0.073 | 7.0203 | 26600 | 0.0737 | 22251776 |
| 0.0831 | 7.0731 | 26800 | 0.0736 | 22418080 |
| 0.0576 | 7.1259 | 27000 | 0.0735 | 22587392 |
| 0.0622 | 7.1787 | 27200 | 0.0736 | 22753056 |
| 0.0731 | 7.2315 | 27400 | 0.0738 | 22920768 |
| 0.0822 | 7.2843 | 27600 | 0.0734 | 23087296 |
| 0.0392 | 7.3371 | 27800 | 0.0735 | 23254400 |
| 0.0331 | 7.3899 | 28000 | 0.0733 | 23422752 |
| 0.0444 | 7.4427 | 28200 | 0.0734 | 23588352 |
| 0.0614 | 7.4954 | 28400 | 0.0737 | 23755840 |
| 0.0677 | 7.5482 | 28600 | 0.0734 | 23923680 |
| 0.0489 | 7.6010 | 28800 | 0.0734 | 24091168 |
| 0.0393 | 7.6538 | 29000 | 0.0735 | 24258016 |
| 0.0912 | 7.7066 | 29200 | 0.0733 | 24427808 |
| 0.0217 | 7.7594 | 29400 | 0.0734 | 24596288 |
| 0.0513 | 7.8122 | 29600 | 0.0730 | 24764192 |
| 0.0658 | 7.8650 | 29800 | 0.0732 | 24932000 |
| 0.0394 | 7.9178 | 30000 | 0.0731 | 25100224 |
| 0.0558 | 7.9706 | 30200 | 0.0735 | 25267808 |
| 0.0496 | 8.0232 | 30400 | 0.0733 | 25433440 |
| 0.0734 | 8.0760 | 30600 | 0.0733 | 25600672 |
| 0.0612 | 8.1288 | 30800 | 0.0731 | 25769408 |
| 0.0413 | 8.1816 | 31000 | 0.0728 | 25936160 |
| 0.0626 | 8.2344 | 31200 | 0.0731 | 26103744 |
| 0.0785 | 8.2872 | 31400 | 0.0731 | 26270560 |
| 0.0716 | 8.3400 | 31600 | 0.0732 | 26437536 |
| 0.0325 | 8.3928 | 31800 | 0.0732 | 26604480 |
| 0.0484 | 8.4456 | 32000 | 0.0732 | 26771680 |
| 0.1124 | 8.4984 | 32200 | 0.0730 | 26940256 |
| 0.0738 | 8.5511 | 32400 | 0.0730 | 27107680 |
| 0.0793 | 8.6039 | 32600 | 0.0731 | 27274048 |
| 0.1138 | 8.6567 | 32800 | 0.0731 | 27440544 |
| 0.1 | 8.7095 | 33000 | 0.0731 | 27608000 |
| 0.03 | 8.7623 | 33200 | 0.0732 | 27776704 |
| 0.0178 | 8.8151 | 33400 | 0.0728 | 27942752 |
| 0.0735 | 8.8679 | 33600 | 0.0729 | 28108864 |
| 0.032 | 8.9207 | 33800 | 0.0728 | 28275296 |
| 0.0549 | 8.9735 | 34000 | 0.0730 | 28443520 |
| 0.0322 | 9.0261 | 34200 | 0.0730 | 28609776 |
| 0.0633 | 9.0789 | 34400 | 0.0729 | 28777712 |
| 0.1346 | 9.1317 | 34600 | 0.0729 | 28944144 |
| 0.0417 | 9.1845 | 34800 | 0.0732 | 29111152 |
| 0.0991 | 9.2373 | 35000 | 0.0731 | 29278000 |
| 0.043 | 9.2901 | 35200 | 0.0729 | 29443792 |
| 0.0353 | 9.3429 | 35400 | 0.0729 | 29609072 |
| 0.1 | 9.3957 | 35600 | 0.0730 | 29776592 |
| 0.0774 | 9.4485 | 35800 | 0.0730 | 29941616 |
| 0.0649 | 9.5013 | 36000 | 0.0729 | 30110160 |
| 0.0702 | 9.5540 | 36200 | 0.0730 | 30277744 |
| 0.1259 | 9.6068 | 36400 | 0.0729 | 30447152 |
| 0.0281 | 9.6596 | 36600 | 0.0729 | 30612976 |
| 0.0457 | 9.7124 | 36800 | 0.0730 | 30780240 |
| 0.0235 | 9.7652 | 37000 | 0.0728 | 30948048 |
| 0.0478 | 9.8180 | 37200 | 0.0730 | 31116368 |
| 0.0338 | 9.8708 | 37400 | 0.0729 | 31283888 |
| 0.0387 | 9.9236 | 37600 | 0.0730 | 31452560 |
| 0.0603 | 9.9764 | 37800 | 0.0728 | 31620720 |
| 0.1036 | 10.0290 | 38000 | 0.0730 | 31786016 |
| 0.0954 | 10.0818 | 38200 | 0.0728 | 31952768 |
| 0.0774 | 10.1346 | 38400 | 0.0728 | 32120320 |
| 0.0159 | 10.1874 | 38600 | 0.0728 | 32287584 |
| 0.0459 | 10.2402 | 38800 | 0.0730 | 32455072 |
| 0.0525 | 10.2930 | 39000 | 0.0729 | 32621184 |
| 0.0389 | 10.3458 | 39200 | 0.0730 | 32788960 |
| 0.0456 | 10.3986 | 39400 | 0.0729 | 32955776 |
| 0.0359 | 10.4514 | 39600 | 0.0728 | 33122816 |
| 0.0964 | 10.5042 | 39800 | 0.0729 | 33291072 |
| 0.0656 | 10.5569 | 40000 | 0.0728 | 33458560 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
MercuraTech/v4_articles_single_base | MercuraTech | "2025-04-19T09:19:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-19T04:28:45Z" | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: v4_articles_single_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v4_articles_single_base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7167
- Accuracy: 0.4397
- F1: 0.4350
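A minimal inference sketch for the classifier (assumes the checkpoint ships its label mapping in its config; the example input is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned XLM-RoBERTa article classifier from the Hub
classifier = pipeline("text-classification", model="MercuraTech/v4_articles_single_base")

# Multilingual inputs are supported because the base model is xlm-roberta-base
print(classifier("Stainless steel hex bolt M8 x 40 mm"))
```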
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:------:|
| 9.9789 | 0.1895 | 500 | 9.9718 | 0.0001 | 0.0000 |
| 9.8369 | 0.3791 | 1000 | 9.7965 | 0.0094 | 0.0002 |
| 9.5592 | 0.5686 | 1500 | 9.5071 | 0.0094 | 0.0002 |
| 9.237 | 0.7582 | 2000 | 9.1586 | 0.0094 | 0.0002 |
| 8.8972 | 0.9477 | 2500 | 8.8566 | 0.0094 | 0.0002 |
| 8.6564 | 1.1372 | 3000 | 8.6478 | 0.0160 | 0.0019 |
| 8.4797 | 1.3268 | 3500 | 8.4819 | 0.0246 | 0.0046 |
| 8.3484 | 1.5163 | 4000 | 8.3282 | 0.0317 | 0.0064 |
| 8.2146 | 1.7058 | 4500 | 8.1572 | 0.0340 | 0.0068 |
| 8.0264 | 1.8954 | 5000 | 7.9514 | 0.0455 | 0.0112 |
| 7.7606 | 2.0849 | 5500 | 7.7213 | 0.0583 | 0.0177 |
| 7.4704 | 2.2745 | 6000 | 7.4514 | 0.0838 | 0.0287 |
| 7.2206 | 2.4640 | 6500 | 7.1248 | 0.1120 | 0.0452 |
| 6.8785 | 2.6535 | 7000 | 6.7901 | 0.1367 | 0.0610 |
| 6.5434 | 2.8431 | 7500 | 6.4326 | 0.1625 | 0.0772 |
| 6.1765 | 3.0326 | 8000 | 6.0874 | 0.1800 | 0.0911 |
| 5.8391 | 3.2221 | 8500 | 5.7638 | 0.2015 | 0.1086 |
| 5.4916 | 3.4117 | 9000 | 5.4800 | 0.2161 | 0.1224 |
| 5.3123 | 3.6012 | 9500 | 5.2343 | 0.2268 | 0.1327 |
| 5.0068 | 3.7908 | 10000 | 5.0158 | 0.2416 | 0.1460 |
| 4.8917 | 3.9803 | 10500 | 4.8260 | 0.2518 | 0.1572 |
| 4.5999 | 4.1698 | 11000 | 4.6644 | 0.2668 | 0.1714 |
| 4.5399 | 4.3594 | 11500 | 4.5127 | 0.2752 | 0.1824 |
| 4.2681 | 4.5489 | 12000 | 4.3847 | 0.2841 | 0.1909 |
| 4.2411 | 4.7384 | 12500 | 4.2655 | 0.2917 | 0.1999 |
| 4.0436 | 4.9280 | 13000 | 4.1556 | 0.2996 | 0.2096 |
| 3.8549 | 5.1175 | 13500 | 4.0580 | 0.3090 | 0.2198 |
| 3.8365 | 5.3071 | 14000 | 3.9771 | 0.3157 | 0.2266 |
| 3.7002 | 5.4966 | 14500 | 3.8831 | 0.3225 | 0.2368 |
| 3.6145 | 5.6861 | 15000 | 3.8118 | 0.3313 | 0.2461 |
| 3.5779 | 5.8757 | 15500 | 3.7317 | 0.3384 | 0.2570 |
| 3.4283 | 6.0652 | 16000 | 3.6797 | 0.3390 | 0.2603 |
| 3.3538 | 6.2547 | 16500 | 3.6148 | 0.3463 | 0.2692 |
| 3.3319 | 6.4443 | 17000 | 3.5629 | 0.3511 | 0.2749 |
| 3.226 | 6.6338 | 17500 | 3.5067 | 0.3564 | 0.2814 |
| 3.2061 | 6.8234 | 18000 | 3.4567 | 0.3604 | 0.2869 |
| 3.1053 | 7.0129 | 18500 | 3.4042 | 0.3675 | 0.2957 |
| 3.0195 | 7.2024 | 19000 | 3.3702 | 0.3704 | 0.3014 |
| 2.9741 | 7.3920 | 19500 | 3.3274 | 0.3755 | 0.3074 |
| 2.9456 | 7.5815 | 20000 | 3.2985 | 0.3761 | 0.3086 |
| 2.9216 | 7.7710 | 20500 | 3.2658 | 0.3772 | 0.3119 |
| 2.8645 | 7.9606 | 21000 | 3.2231 | 0.3847 | 0.3226 |
| 2.7615 | 8.1501 | 21500 | 3.2023 | 0.3899 | 0.3265 |
| 2.7581 | 8.3397 | 22000 | 3.1769 | 0.3878 | 0.3273 |
| 2.7612 | 8.5292 | 22500 | 3.1357 | 0.3936 | 0.3370 |
| 2.656 | 8.7187 | 23000 | 3.1208 | 0.3893 | 0.3372 |
| 2.6204 | 8.9083 | 23500 | 3.0876 | 0.3973 | 0.3440 |
| 2.5629 | 9.0978 | 24000 | 3.0708 | 0.3957 | 0.3435 |
| 2.5407 | 9.2873 | 24500 | 3.0475 | 0.4014 | 0.3564 |
| 2.501 | 9.4769 | 25000 | 3.0425 | 0.4007 | 0.3512 |
| 2.4615 | 9.6664 | 25500 | 3.0077 | 0.4064 | 0.3592 |
| 2.4667 | 9.8560 | 26000 | 2.9950 | 0.4061 | 0.3634 |
| 2.3594 | 10.0455 | 26500 | 2.9875 | 0.4048 | 0.3661 |
| 2.3613 | 10.2350 | 27000 | 2.9587 | 0.4056 | 0.3727 |
| 2.3253 | 10.4246 | 27500 | 2.9467 | 0.4110 | 0.3751 |
| 2.332 | 10.6141 | 28000 | 2.9342 | 0.4114 | 0.3734 |
| 2.2866 | 10.8036 | 28500 | 2.9034 | 0.4138 | 0.3830 |
| 2.2932 | 10.9932 | 29000 | 2.8993 | 0.4117 | 0.3822 |
| 2.2165 | 11.1827 | 29500 | 2.8904 | 0.4199 | 0.3865 |
| 2.1911 | 11.3723 | 30000 | 2.8893 | 0.4145 | 0.3858 |
| 2.1368 | 11.5618 | 30500 | 2.8658 | 0.4212 | 0.3951 |
| 2.147 | 11.7513 | 31000 | 2.8640 | 0.4144 | 0.3911 |
| 2.0725 | 11.9409 | 31500 | 2.8407 | 0.4203 | 0.3978 |
| 2.071 | 12.1304 | 32000 | 2.8350 | 0.4237 | 0.4005 |
| 2.0455 | 12.3199 | 32500 | 2.8318 | 0.4233 | 0.3999 |
| 2.02 | 12.5095 | 33000 | 2.8176 | 0.4256 | 0.4033 |
| 2.0375 | 12.6990 | 33500 | 2.8144 | 0.4264 | 0.4063 |
| 1.9853 | 12.8886 | 34000 | 2.7982 | 0.4290 | 0.4075 |
| 1.9396 | 13.0781 | 34500 | 2.7921 | 0.4271 | 0.4120 |
| 1.9214 | 13.2676 | 35000 | 2.7846 | 0.4261 | 0.4100 |
| 1.9103 | 13.4572 | 35500 | 2.7845 | 0.4246 | 0.4099 |
| 1.9422 | 13.6467 | 36000 | 2.7822 | 0.4285 | 0.4112 |
| 1.9098 | 13.8362 | 36500 | 2.7708 | 0.4290 | 0.4130 |
| 1.8087 | 14.0258 | 37000 | 2.7687 | 0.4320 | 0.4177 |
| 1.7799 | 14.2153 | 37500 | 2.7529 | 0.4326 | 0.4176 |
| 1.7517 | 14.4049 | 38000 | 2.7543 | 0.4345 | 0.4218 |
| 1.8091 | 14.5944 | 38500 | 2.7533 | 0.4347 | 0.4215 |
| 1.8129 | 14.7839 | 39000 | 2.7444 | 0.4330 | 0.4230 |
| 1.777 | 14.9735 | 39500 | 2.7382 | 0.4370 | 0.4284 |
| 1.6449 | 15.1630 | 40000 | 2.7459 | 0.4344 | 0.4240 |
| 1.7006 | 15.3525 | 40500 | 2.7225 | 0.4375 | 0.4312 |
| 1.7103 | 15.5421 | 41000 | 2.7314 | 0.4402 | 0.4308 |
| 1.7152 | 15.7316 | 41500 | 2.7247 | 0.4401 | 0.4331 |
| 1.7274 | 15.9212 | 42000 | 2.7218 | 0.4388 | 0.4310 |
| 1.6366 | 16.1107 | 42500 | 2.7167 | 0.4397 | 0.4350 |
| 1.6787 | 16.3002 | 43000 | 2.6995 | 0.4425 | 0.4375 |
| 1.5951 | 16.4898 | 43500 | 2.7195 | 0.4390 | 0.4313 |
| 1.6202 | 16.6793 | 44000 | 2.7076 | 0.4406 | 0.4358 |
| 1.6674 | 16.8688 | 44500 | 2.7015 | 0.4414 | 0.4360 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
summerstars/Solara | summerstars | "2025-04-19T09:17:30Z" | 2 | 0 | transformers.js | [
"transformers.js",
"safetensors",
"llama",
"text-generation",
"onnx",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | "2025-04-19T05:17:38Z" | ---
license: apache-2.0
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
language:
- en
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers.js
---
# 🌞 Solara — summerstars/Solara
## **Created by a High School Student | Built on Google Colab (T4 GPU)**
## **高校生によって開発 | Google Colab(T4 GPU)で作成**
**Solara** is a lightweight, instruction-tuned language model based on [`HuggingFaceTB/SmolLM2-360M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct).
It was developed by a high school student using Google Colab with a T4 GPU.
Despite its compact size, Solara delivers quick responses and handles everyday tasks efficiently.
**Solara(ソララ)** は、[`HuggingFaceTB/SmolLM2-360M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) をベースとした軽量な指示応答型言語モデルです。
Google Colab(T4 GPU)を使用して高校生が開発しました。
小型ながら、日常のタスクや会話を効率的かつ高速に処理します。
---
## 📌 Model Details / モデル詳細
- **Base Model / ベースモデル**: HuggingFaceTB/SmolLM2-360M-Instruct
- **Parameters / パラメータ数**: 360M
- **Architecture / アーキテクチャ**: Decoder-only Transformer / デコーダ専用トランスフォーマー
- **Languages / 対応言語**: English / 英語
- **License / ライセンス**: Apache 2.0
---
## 🚀 Use Cases / 主な用途
- Lightweight chatbots / 軽量チャットボット
- Inference on CPUs or mobile devices / CPU・モバイル端末での推論
- Educational or hobbyist projects / 教育・趣味用途
- Instruction-following tasks / 指示応答タスク
---
## 🛠️ How to Use / 使用方法
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "summerstars/Solara"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
prompt = "Please explain black holes in simple terms."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Print the result / 結果を表示
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
shibajustfor/802ae234-5e4c-4ba5-8809-0a981f51f7ba | shibajustfor | "2025-04-19T09:11:32Z" | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T09:11:18Z" | ---
library_name: transformers
model_name: shibajustfor/802ae234-5e4c-4ba5-8809-0a981f51f7ba
tags:
- generated_from_trainer
licence: license
---
# Model Card for shibajustfor/802ae234-5e4c-4ba5-8809-0a981f51f7ba
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shibajustfor/802ae234-5e4c-4ba5-8809-0a981f51f7ba", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ahmet71cakir/phi4-turbochat-full | ahmet71cakir | "2025-04-19T09:11:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-18T18:14:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF | mradermacher | "2025-04-19T09:10:26Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:NexesMess/Llama_3.x_70b_Triads_V1",
"base_model:quantized:NexesMess/Llama_3.x_70b_Triads_V1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-18T08:45:09Z" | ---
base_model: NexesMess/Llama_3.x_70b_Triads_V1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/NexesMess/Llama_3.x_70b_Triads_V1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
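A minimal local-inference sketch (assumes the llama-cpp-python bindings and a single-file quant already downloaded; the two-part Q6_K files must be concatenated into one .gguf first, as described in the README linked above):
```python
from llama_cpp import Llama

# File name and context size are illustrative; pick the quant that fits your hardware
llm = Llama(
    model_path="Llama_3.x_70b_Triads_V1.i1-Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm("Explain in one sentence what an imatrix quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```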
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Triads_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Triads_V1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
umar141/llama3.2_Baro_v2 | umar141 | "2025-04-19T09:07:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-19T09:07:06Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** umar141
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
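A minimal inference sketch with Unsloth's fast loader (assumes a CUDA GPU and the unsloth package; the prompt and generation settings are illustrative):
```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit and switch to inference mode
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="umar141/llama3.2_Baro_v2",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

inputs = tokenizer("Write a one-line motivational quote.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```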
|
rbelanec/train_qnli_1744902616 | rbelanec | "2025-04-19T09:05:59Z" | 3 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-04-18T18:05:23Z" | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_qnli_1744902616
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_qnli_1744902616
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the qnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0513
- Num Input Tokens Seen: 74724160
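A minimal sketch for attaching the adapter in this repository to the base model with PEFT (assumes access to the Mistral-7B-Instruct-v0.3 weights; the QNLI-style prompt is illustrative, not the exact training template):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the fine-tuned adapter weights from this repository
model = PeftModel.from_pretrained(base, "rbelanec/train_qnli_1744902616")

prompt = ("Question: Where is the Eiffel Tower located?\n"
          "Sentence: The Eiffel Tower stands in Paris.\n"
          "Does the sentence answer the question? Answer yes or no.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0], skip_special_tokens=True))
```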
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 0.1529 | 0.0339 | 200 | 0.1279 | 375872 |
| 0.1128 | 0.0679 | 400 | 0.1065 | 754656 |
| 0.0909 | 0.1018 | 600 | 0.1009 | 1127296 |
| 0.1101 | 0.1358 | 800 | 0.0961 | 1500832 |
| 0.0937 | 0.1697 | 1000 | 0.0936 | 1870752 |
| 0.0823 | 0.2037 | 1200 | 0.0897 | 2248448 |
| 0.1123 | 0.2376 | 1400 | 0.0871 | 2622784 |
| 0.0786 | 0.2716 | 1600 | 0.0847 | 2995616 |
| 0.0674 | 0.3055 | 1800 | 0.0829 | 3370144 |
| 0.0745 | 0.3395 | 2000 | 0.0811 | 3747936 |
| 0.074 | 0.3734 | 2200 | 0.0795 | 4126560 |
| 0.0795 | 0.4073 | 2400 | 0.0788 | 4497920 |
| 0.068 | 0.4413 | 2600 | 0.0770 | 4870432 |
| 0.0948 | 0.4752 | 2800 | 0.0759 | 5242976 |
| 0.0751 | 0.5092 | 3000 | 0.0748 | 5615808 |
| 0.0722 | 0.5431 | 3200 | 0.0738 | 5984672 |
| 0.0915 | 0.5771 | 3400 | 0.0733 | 6356832 |
| 0.0585 | 0.6110 | 3600 | 0.0729 | 6732928 |
| 0.0577 | 0.6450 | 3800 | 0.0723 | 7111456 |
| 0.0511 | 0.6789 | 4000 | 0.0714 | 7481824 |
| 0.0664 | 0.7129 | 4200 | 0.0702 | 7857440 |
| 0.0703 | 0.7468 | 4400 | 0.0699 | 8229632 |
| 0.0676 | 0.7808 | 4600 | 0.0689 | 8601824 |
| 0.0665 | 0.8147 | 4800 | 0.0683 | 8974688 |
| 0.0851 | 0.8486 | 5000 | 0.0682 | 9345088 |
| 0.0874 | 0.8826 | 5200 | 0.0673 | 9720928 |
| 0.063 | 0.9165 | 5400 | 0.0667 | 10090976 |
| 0.0652 | 0.9505 | 5600 | 0.0681 | 10461824 |
| 0.0807 | 0.9844 | 5800 | 0.0660 | 10837568 |
| 0.0644 | 1.0183 | 6000 | 0.0662 | 11211008 |
| 0.0679 | 1.0523 | 6200 | 0.0649 | 11582528 |
| 0.0594 | 1.0862 | 6400 | 0.0647 | 11958208 |
| 0.0808 | 1.1202 | 6600 | 0.0645 | 12334752 |
| 0.0535 | 1.1541 | 6800 | 0.0650 | 12710176 |
| 0.0547 | 1.1881 | 7000 | 0.0650 | 13083200 |
| 0.0692 | 1.2220 | 7200 | 0.0632 | 13458944 |
| 0.0571 | 1.2560 | 7400 | 0.0629 | 13836256 |
| 0.0874 | 1.2899 | 7600 | 0.0630 | 14209248 |
| 0.0752 | 1.3238 | 7800 | 0.0626 | 14585344 |
| 0.0605 | 1.3578 | 8000 | 0.0623 | 14955328 |
| 0.0412 | 1.3917 | 8200 | 0.0616 | 15331776 |
| 0.0396 | 1.4257 | 8400 | 0.0615 | 15706624 |
| 0.0307 | 1.4596 | 8600 | 0.0619 | 16075392 |
| 0.0412 | 1.4936 | 8800 | 0.0610 | 16445568 |
| 0.0981 | 1.5275 | 9000 | 0.0609 | 16819648 |
| 0.0615 | 1.5615 | 9200 | 0.0605 | 17191872 |
| 0.0829 | 1.5954 | 9400 | 0.0611 | 17561280 |
| 0.0505 | 1.6294 | 9600 | 0.0598 | 17936128 |
| 0.0755 | 1.6633 | 9800 | 0.0598 | 18307616 |
| 0.0499 | 1.6972 | 10000 | 0.0593 | 18683168 |
| 0.0665 | 1.7312 | 10200 | 0.0595 | 19053408 |
| 0.0485 | 1.7651 | 10400 | 0.0591 | 19427296 |
| 0.0618 | 1.7991 | 10600 | 0.0589 | 19802400 |
| 0.0495 | 1.8330 | 10800 | 0.0586 | 20173056 |
| 0.054 | 1.8670 | 11000 | 0.0585 | 20550720 |
| 0.0792 | 1.9009 | 11200 | 0.0589 | 20920224 |
| 0.0688 | 1.9349 | 11400 | 0.0581 | 21289344 |
| 0.0484 | 1.9688 | 11600 | 0.0580 | 21666048 |
| 0.0698 | 2.0027 | 11800 | 0.0603 | 22041760 |
| 0.0501 | 2.0367 | 12000 | 0.0576 | 22412256 |
| 0.0569 | 2.0706 | 12200 | 0.0573 | 22782848 |
| 0.048 | 2.1046 | 12400 | 0.0574 | 23151392 |
| 0.0453 | 2.1385 | 12600 | 0.0571 | 23523648 |
| 0.0656 | 2.1724 | 12800 | 0.0569 | 23892992 |
| 0.0461 | 2.2064 | 13000 | 0.0571 | 24264192 |
| 0.0471 | 2.2403 | 13200 | 0.0567 | 24635264 |
| 0.0702 | 2.2743 | 13400 | 0.0564 | 25009664 |
| 0.0558 | 2.3082 | 13600 | 0.0563 | 25382432 |
| 0.0769 | 2.3422 | 13800 | 0.0568 | 25755616 |
| 0.0487 | 2.3761 | 14000 | 0.0560 | 26131424 |
| 0.0775 | 2.4101 | 14200 | 0.0560 | 26504960 |
| 0.0526 | 2.4440 | 14400 | 0.0568 | 26877888 |
| 0.0483 | 2.4780 | 14600 | 0.0558 | 27248384 |
| 0.0695 | 2.5119 | 14800 | 0.0556 | 27625376 |
| 0.0663 | 2.5458 | 15000 | 0.0561 | 28005696 |
| 0.0542 | 2.5798 | 15200 | 0.0555 | 28379936 |
| 0.0754 | 2.6137 | 15400 | 0.0557 | 28749536 |
| 0.0406 | 2.6477 | 15600 | 0.0553 | 29128672 |
| 0.0476 | 2.6816 | 15800 | 0.0552 | 29503456 |
| 0.0725 | 2.7156 | 16000 | 0.0549 | 29874176 |
| 0.07 | 2.7495 | 16200 | 0.0549 | 30251904 |
| 0.0544 | 2.7835 | 16400 | 0.0552 | 30626560 |
| 0.0545 | 2.8174 | 16600 | 0.0547 | 30999968 |
| 0.0425 | 2.8514 | 16800 | 0.0546 | 31376704 |
| 0.0646 | 2.8853 | 17000 | 0.0562 | 31749472 |
| 0.0542 | 2.9193 | 17200 | 0.0546 | 32128320 |
| 0.0445 | 2.9532 | 17400 | 0.0547 | 32501056 |
| 0.065 | 2.9871 | 17600 | 0.0541 | 32872640 |
| 0.0465 | 3.0210 | 17800 | 0.0543 | 33243744 |
| 0.0475 | 3.0550 | 18000 | 0.0546 | 33619808 |
| 0.0886 | 3.0889 | 18200 | 0.0543 | 33994048 |
| 0.0389 | 3.1229 | 18400 | 0.0544 | 34361920 |
| 0.0716 | 3.1568 | 18600 | 0.0537 | 34735392 |
| 0.065 | 3.1908 | 18800 | 0.0537 | 35107872 |
| 0.0658 | 3.2247 | 19000 | 0.0536 | 35486976 |
| 0.063 | 3.2587 | 19200 | 0.0539 | 35862880 |
| 0.0491 | 3.2926 | 19400 | 0.0536 | 36237280 |
| 0.0656 | 3.3266 | 19600 | 0.0535 | 36614176 |
| 0.0568 | 3.3605 | 19800 | 0.0534 | 36987200 |
| 0.058 | 3.3944 | 20000 | 0.0537 | 37357312 |
| 0.0471 | 3.4284 | 20200 | 0.0533 | 37728448 |
| 0.0463 | 3.4623 | 20400 | 0.0535 | 38104736 |
| 0.0691 | 3.4963 | 20600 | 0.0534 | 38477696 |
| 0.0437 | 3.5302 | 20800 | 0.0531 | 38847808 |
| 0.0465 | 3.5642 | 21000 | 0.0529 | 39222464 |
| 0.0529 | 3.5981 | 21200 | 0.0530 | 39595392 |
| 0.0699 | 3.6321 | 21400 | 0.0530 | 39971968 |
| 0.063 | 3.6660 | 21600 | 0.0529 | 40341952 |
| 0.0664 | 3.7000 | 21800 | 0.0530 | 40713376 |
| 0.0464 | 3.7339 | 22000 | 0.0535 | 41085856 |
| 0.0474 | 3.7679 | 22200 | 0.0527 | 41461568 |
| 0.0436 | 3.8018 | 22400 | 0.0526 | 41833280 |
| 0.0458 | 3.8357 | 22600 | 0.0526 | 42205152 |
| 0.0419 | 3.8697 | 22800 | 0.0526 | 42578144 |
| 0.0587 | 3.9036 | 23000 | 0.0527 | 42956608 |
| 0.0522 | 3.9376 | 23200 | 0.0526 | 43327904 |
| 0.0315 | 3.9715 | 23400 | 0.0524 | 43700960 |
| 0.04 | 4.0054 | 23600 | 0.0524 | 44077568 |
| 0.051 | 4.0394 | 23800 | 0.0528 | 44449632 |
| 0.0667 | 4.0733 | 24000 | 0.0524 | 44825184 |
| 0.0606 | 4.1073 | 24200 | 0.0522 | 45195872 |
| 0.0362 | 4.1412 | 24400 | 0.0525 | 45566816 |
| 0.0487 | 4.1752 | 24600 | 0.0523 | 45945824 |
| 0.0492 | 4.2091 | 24800 | 0.0525 | 46322304 |
| 0.0365 | 4.2431 | 25000 | 0.0522 | 46694976 |
| 0.0683 | 4.2770 | 25200 | 0.0521 | 47069472 |
| 0.0513 | 4.3109 | 25400 | 0.0522 | 47444064 |
| 0.0546 | 4.3449 | 25600 | 0.0522 | 47819744 |
| 0.0593 | 4.3788 | 25800 | 0.0522 | 48190912 |
| 0.0514 | 4.4128 | 26000 | 0.0528 | 48563040 |
| 0.0454 | 4.4467 | 26200 | 0.0520 | 48936320 |
| 0.0486 | 4.4807 | 26400 | 0.0519 | 49306944 |
| 0.0393 | 4.5146 | 26600 | 0.0521 | 49683712 |
| 0.0322 | 4.5486 | 26800 | 0.0519 | 50057824 |
| 0.042 | 4.5825 | 27000 | 0.0518 | 50431552 |
| 0.058 | 4.6165 | 27200 | 0.0518 | 50808576 |
| 0.0489 | 4.6504 | 27400 | 0.0518 | 51182144 |
| 0.0376 | 4.6843 | 27600 | 0.0517 | 51554016 |
| 0.0524 | 4.7183 | 27800 | 0.0518 | 51925888 |
| 0.05 | 4.7522 | 28000 | 0.0519 | 52295168 |
| 0.0391 | 4.7862 | 28200 | 0.0519 | 52664096 |
| 0.038 | 4.8201 | 28400 | 0.0517 | 53038784 |
| 0.0566 | 4.8541 | 28600 | 0.0517 | 53412352 |
| 0.0506 | 4.8880 | 28800 | 0.0517 | 53788608 |
| 0.0616 | 4.9220 | 29000 | 0.0518 | 54166176 |
| 0.0675 | 4.9559 | 29200 | 0.0518 | 54541216 |
| 0.066 | 4.9899 | 29400 | 0.0517 | 54916928 |
| 0.0629 | 5.0238 | 29600 | 0.0516 | 55288160 |
| 0.0287 | 5.0577 | 29800 | 0.0516 | 55662784 |
| 0.0421 | 5.0917 | 30000 | 0.0519 | 56034432 |
| 0.0298 | 5.1256 | 30200 | 0.0518 | 56405792 |
| 0.0739 | 5.1595 | 30400 | 0.0516 | 56777504 |
| 0.046 | 5.1935 | 30600 | 0.0516 | 57149760 |
| 0.0529 | 5.2274 | 30800 | 0.0515 | 57521536 |
| 0.0289 | 5.2614 | 31000 | 0.0514 | 57889408 |
| 0.0424 | 5.2953 | 31200 | 0.0519 | 58258624 |
| 0.0427 | 5.3293 | 31400 | 0.0517 | 58635520 |
| 0.0425 | 5.3632 | 31600 | 0.0519 | 59006592 |
| 0.0518 | 5.3972 | 31800 | 0.0515 | 59381312 |
| 0.0716 | 5.4311 | 32000 | 0.0514 | 59761568 |
| 0.059 | 5.4651 | 32200 | 0.0516 | 60138720 |
| 0.0601 | 5.4990 | 32400 | 0.0516 | 60511168 |
| 0.0695 | 5.5329 | 32600 | 0.0514 | 60884448 |
| 0.0269 | 5.5669 | 32800 | 0.0513 | 61259680 |
| 0.0423 | 5.6008 | 33000 | 0.0514 | 61636416 |
| 0.0843 | 5.6348 | 33200 | 0.0514 | 62013760 |
| 0.0657 | 5.6687 | 33400 | 0.0516 | 62389440 |
| 0.0834 | 5.7027 | 33600 | 0.0514 | 62764512 |
| 0.0725 | 5.7366 | 33800 | 0.0514 | 63139872 |
| 0.0354 | 5.7706 | 34000 | 0.0514 | 63517632 |
| 0.0817 | 5.8045 | 34200 | 0.0515 | 63889248 |
| 0.0493 | 5.8385 | 34400 | 0.0513 | 64262048 |
| 0.0603 | 5.8724 | 34600 | 0.0513 | 64632256 |
| 0.0322 | 5.9064 | 34800 | 0.0513 | 65006944 |
| 0.08 | 5.9403 | 35000 | 0.0513 | 65382656 |
| 0.0451 | 5.9742 | 35200 | 0.0514 | 65756992 |
| 0.0516 | 6.0081 | 35400 | 0.0513 | 66125280 |
| 0.0647 | 6.0421 | 35600 | 0.0513 | 66493536 |
| 0.0448 | 6.0760 | 35800 | 0.0514 | 66867936 |
| 0.0546 | 6.1100 | 36000 | 0.0515 | 67243328 |
| 0.0449 | 6.1439 | 36200 | 0.0516 | 67616992 |
| 0.0329 | 6.1779 | 36400 | 0.0516 | 67995520 |
| 0.035 | 6.2118 | 36600 | 0.0514 | 68370624 |
| 0.0461 | 6.2458 | 36800 | 0.0514 | 68746880 |
| 0.0456 | 6.2797 | 37000 | 0.0515 | 69119328 |
| 0.0573 | 6.3137 | 37200 | 0.0514 | 69490336 |
| 0.0501 | 6.3476 | 37400 | 0.0515 | 69862688 |
| 0.0323 | 6.3816 | 37600 | 0.0513 | 70238592 |
| 0.0381 | 6.4155 | 37800 | 0.0514 | 70612608 |
| 0.054 | 6.4494 | 38000 | 0.0514 | 70985568 |
| 0.0242 | 6.4834 | 38200 | 0.0515 | 71360704 |
| 0.0399 | 6.5173 | 38400 | 0.0515 | 71738432 |
| 0.0286 | 6.5513 | 38600 | 0.0514 | 72112640 |
| 0.0532 | 6.5852 | 38800 | 0.0514 | 72484256 |
| 0.0447 | 6.6192 | 39000 | 0.0515 | 72858912 |
| 0.046 | 6.6531 | 39200 | 0.0513 | 73232576 |
| 0.0673 | 6.6871 | 39400 | 0.0516 | 73604352 |
| 0.0389 | 6.7210 | 39600 | 0.0514 | 73975648 |
| 0.0397 | 6.7550 | 39800 | 0.0514 | 74349632 |
| 0.0395 | 6.7889 | 40000 | 0.0514 | 74724160 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
dzanbek/0039ab53-95ed-4f05-9bd4-15486ae682bc | dzanbek | "2025-04-19T09:05:59Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-19T08:54:46Z" | ---
library_name: peft
license: other
base_model: facebook/opt-1.3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0039ab53-95ed-4f05-9bd4-15486ae682bc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-1.3b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 45af3457229c3363_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/45af3457229c3363_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/0039ab53-95ed-4f05-9bd4-15486ae682bc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/45af3457229c3363_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 047ce7a0-d57c-486c-ba3f-1f0804c929df
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 047ce7a0-d57c-486c-ba3f-1f0804c929df
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0039ab53-95ed-4f05-9bd4-15486ae682bc
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0908
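A minimal sketch for loading the LoRA adapter produced by the Axolotl run above and folding it into the base weights for plain transformers inference (assumes the adapter files follow the standard PEFT layout; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "facebook/opt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter, then merge it so no PEFT wrapper is needed at inference time
model = PeftModel.from_pretrained(base, "dzanbek/0039ab53-95ed-4f05-9bd4-15486ae682bc")
merged = model.merge_and_unload()

inputs = tokenizer("Instruction: write a short greeting.\nResponse:", return_tensors="pt")
outputs = merged.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```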
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.964 | 0.0114 | 150 | 3.0908 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/noborumix-illustrious-xl-20-merged-v20-illustrious20-sdxl | John6666 | "2025-04-19T09:04:12Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"Illustrious XL v2.0",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:finetune:OnomaAIResearch/Illustrious-XL-v2.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-04-19T08:58:45Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- Illustrious XL v2.0
- illustrious
base_model: OnomaAIResearch/Illustrious-XL-v2.0
---
Original model is [here](https://civitai.com/models/1439680?modelVersionId=1681362).
This model was created by [noboru6703](https://civitai.com/user/noboru6703).
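A minimal text-to-image sketch with diffusers (assumes a CUDA GPU; the prompt and sampler settings are illustrative rather than the creator's recommended settings):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/noborumix-illustrious-xl-20-merged-v20-illustrious20-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "1girl, upper body, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, worst quality",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("sample.png")
```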
|
mlfoundations-dev/b1_science_top_16_10k | mlfoundations-dev | "2025-04-19T08:51:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T04:45:08Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b1_science_top_16_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b1_science_top_16_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b1_science_top_16_10k dataset.
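A minimal chat-style inference sketch (assumes the fine-tune keeps the Qwen2.5 chat template; the question is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/b1_science_top_16_10k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Why does ice float on water?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```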
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
vuongpro/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_skilled_owl | vuongpro | "2025-04-19T08:45:50Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scavenging skilled owl",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-14T21:16:54Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_skilled_owl
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scavenging skilled owl
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_skilled_owl
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vuongpro/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_skilled_owl", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ahmet71cakir/phi4-turbochat | ahmet71cakir | "2025-04-19T08:42:25Z" | 21 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"license:mit",
"region:us"
] | null | "2025-04-18T17:12:05Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-4-mini-instruct
tags:
- generated_from_trainer
model-index:
- name: phi4-turbochat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi4-turbochat
This model is a fine-tuned version of [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) on an unknown dataset.
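A minimal loading sketch using PEFT's auto class, which reads the base-model reference from the adapter config (assumes the adapter in this repository follows the standard PEFT layout; the prompt is illustrative):
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "ahmet71cakir/phi4-turbochat"
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-instruct")

# Loads microsoft/Phi-4-mini-instruct and attaches this adapter in one call
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")

inputs = tokenizer("Give me one tip for writing clear documentation.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```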
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
rbelanec/train_sst2_1744902625 | rbelanec | "2025-04-19T08:41:51Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-04-18T22:40:11Z" | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- llama-factory
- ia3
- generated_from_trainer
model-index:
- name: train_sst2_1744902625
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_sst2_1744902625
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0713
- Num Input Tokens Seen: 33458560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-------:|:-----:|:---------------:|:-----------------:|
| 0.1318 | 0.0528 | 200 | 0.1586 | 166688 |
| 0.1951 | 0.1056 | 400 | 0.1397 | 334048 |
| 0.0693 | 0.1584 | 600 | 0.1319 | 500448 |
| 0.0969 | 0.2112 | 800 | 0.1235 | 667872 |
| 0.1196 | 0.2640 | 1000 | 0.1205 | 834848 |
| 0.1125 | 0.3167 | 1200 | 0.1172 | 1002816 |
| 0.0759 | 0.3695 | 1400 | 0.1145 | 1169088 |
| 0.2051 | 0.4223 | 1600 | 0.1123 | 1337088 |
| 0.1022 | 0.4751 | 1800 | 0.1098 | 1505536 |
| 0.1273 | 0.5279 | 2000 | 0.1084 | 1673024 |
| 0.1299 | 0.5807 | 2200 | 0.1064 | 1842304 |
| 0.1179 | 0.6335 | 2400 | 0.1051 | 2007328 |
| 0.0799 | 0.6863 | 2600 | 0.1043 | 2174880 |
| 0.1575 | 0.7391 | 2800 | 0.1027 | 2341280 |
| 0.116 | 0.7919 | 3000 | 0.1017 | 2509440 |
| 0.0956 | 0.8447 | 3200 | 0.1001 | 2674784 |
| 0.0767 | 0.8975 | 3400 | 0.0986 | 2843680 |
| 0.1587 | 0.9502 | 3600 | 0.1008 | 3011904 |
| 0.0879 | 1.0029 | 3800 | 0.0968 | 3178064 |
| 0.1003 | 1.0557 | 4000 | 0.0961 | 3345904 |
| 0.0937 | 1.1085 | 4200 | 0.0947 | 3514608 |
| 0.0834 | 1.1613 | 4400 | 0.0943 | 3680560 |
| 0.0678 | 1.2141 | 4600 | 0.0933 | 3849328 |
| 0.1148 | 1.2669 | 4800 | 0.0934 | 4017200 |
| 0.0771 | 1.3197 | 5000 | 0.0917 | 4187184 |
| 0.0694 | 1.3724 | 5200 | 0.0915 | 4354416 |
| 0.0979 | 1.4252 | 5400 | 0.0907 | 4519856 |
| 0.1024 | 1.4780 | 5600 | 0.0901 | 4687280 |
| 0.0657 | 1.5308 | 5800 | 0.0896 | 4856112 |
| 0.0484 | 1.5836 | 6000 | 0.0896 | 5022736 |
| 0.1531 | 1.6364 | 6200 | 0.0881 | 5188656 |
| 0.0504 | 1.6892 | 6400 | 0.0878 | 5356208 |
| 0.1301 | 1.7420 | 6600 | 0.0874 | 5523952 |
| 0.1359 | 1.7948 | 6800 | 0.0867 | 5690672 |
| 0.0949 | 1.8476 | 7000 | 0.0864 | 5857072 |
| 0.0511 | 1.9004 | 7200 | 0.0864 | 6024976 |
| 0.0878 | 1.9531 | 7400 | 0.0859 | 6191664 |
| 0.0956 | 2.0058 | 7600 | 0.0850 | 6357472 |
| 0.1002 | 2.0586 | 7800 | 0.0851 | 6525984 |
| 0.0851 | 2.1114 | 8000 | 0.0841 | 6692320 |
| 0.0766 | 2.1642 | 8200 | 0.0838 | 6860064 |
| 0.1541 | 2.2170 | 8400 | 0.0846 | 7026528 |
| 0.0482 | 2.2698 | 8600 | 0.0834 | 7192384 |
| 0.1002 | 2.3226 | 8800 | 0.0832 | 7358816 |
| 0.0775 | 2.3753 | 9000 | 0.0827 | 7526496 |
| 0.0724 | 2.4281 | 9200 | 0.0824 | 7696064 |
| 0.0841 | 2.4809 | 9400 | 0.0831 | 7863456 |
| 0.1016 | 2.5337 | 9600 | 0.0818 | 8031776 |
| 0.0756 | 2.5865 | 9800 | 0.0819 | 8199584 |
| 0.0949 | 2.6393 | 10000 | 0.0810 | 8366016 |
| 0.0677 | 2.6921 | 10200 | 0.0812 | 8531808 |
| 0.0611 | 2.7449 | 10400 | 0.0807 | 8702976 |
| 0.0474 | 2.7977 | 10600 | 0.0804 | 8870944 |
| 0.0933 | 2.8505 | 10800 | 0.0812 | 9039680 |
| 0.1127 | 2.9033 | 11000 | 0.0813 | 9206880 |
| 0.0633 | 2.9561 | 11200 | 0.0802 | 9372128 |
| 0.0816 | 3.0087 | 11400 | 0.0794 | 9538768 |
| 0.0781 | 3.0615 | 11600 | 0.0791 | 9705232 |
| 0.0599 | 3.1143 | 11800 | 0.0793 | 9871632 |
| 0.0713 | 3.1671 | 12000 | 0.0794 | 10039472 |
| 0.0291 | 3.2199 | 12200 | 0.0789 | 10206320 |
| 0.0547 | 3.2727 | 12400 | 0.0785 | 10376240 |
| 0.0882 | 3.3255 | 12600 | 0.0787 | 10544464 |
| 0.0322 | 3.3782 | 12800 | 0.0781 | 10712240 |
| 0.0395 | 3.4310 | 13000 | 0.0778 | 10879120 |
| 0.0472 | 3.4838 | 13200 | 0.0779 | 11045072 |
| 0.0689 | 3.5366 | 13400 | 0.0781 | 11211312 |
| 0.09 | 3.5894 | 13600 | 0.0774 | 11378128 |
| 0.0392 | 3.6422 | 13800 | 0.0780 | 11544592 |
| 0.1368 | 3.6950 | 14000 | 0.0771 | 11713040 |
| 0.1223 | 3.7478 | 14200 | 0.0774 | 11880432 |
| 0.106 | 3.8006 | 14400 | 0.0765 | 12048176 |
| 0.049 | 3.8534 | 14600 | 0.0771 | 12215792 |
| 0.0427 | 3.9062 | 14800 | 0.0769 | 12383792 |
| 0.052 | 3.9590 | 15000 | 0.0764 | 12549680 |
| 0.0927 | 4.0116 | 15200 | 0.0763 | 12716448 |
| 0.0437 | 4.0644 | 15400 | 0.0767 | 12882752 |
| 0.0549 | 4.1172 | 15600 | 0.0764 | 13051200 |
| 0.0587 | 4.1700 | 15800 | 0.0761 | 13217024 |
| 0.0562 | 4.2228 | 16000 | 0.0757 | 13382784 |
| 0.0657 | 4.2756 | 16200 | 0.0764 | 13549216 |
| 0.0374 | 4.3284 | 16400 | 0.0752 | 13719072 |
| 0.1196 | 4.3812 | 16600 | 0.0752 | 13884928 |
| 0.0847 | 4.4339 | 16800 | 0.0751 | 14051584 |
| 0.0485 | 4.4867 | 17000 | 0.0769 | 14220704 |
| 0.0352 | 4.5395 | 17200 | 0.0749 | 14387008 |
| 0.1084 | 4.5923 | 17400 | 0.0749 | 14555808 |
| 0.0591 | 4.6451 | 17600 | 0.0755 | 14723456 |
| 0.116 | 4.6979 | 17800 | 0.0749 | 14890880 |
| 0.0692 | 4.7507 | 18000 | 0.0755 | 15059744 |
| 0.0686 | 4.8035 | 18200 | 0.0746 | 15224512 |
| 0.1239 | 4.8563 | 18400 | 0.0744 | 15392960 |
| 0.0474 | 4.9091 | 18600 | 0.0744 | 15561696 |
| 0.0925 | 4.9619 | 18800 | 0.0744 | 15728800 |
| 0.0724 | 5.0145 | 19000 | 0.0741 | 15897552 |
| 0.0674 | 5.0673 | 19200 | 0.0740 | 16064688 |
| 0.0695 | 5.1201 | 19400 | 0.0740 | 16231120 |
| 0.0706 | 5.1729 | 19600 | 0.0737 | 16397744 |
| 0.1331 | 5.2257 | 19800 | 0.0738 | 16564176 |
| 0.0663 | 5.2785 | 20000 | 0.0737 | 16731600 |
| 0.0327 | 5.3313 | 20200 | 0.0748 | 16898064 |
| 0.0879 | 5.3841 | 20400 | 0.0738 | 17064080 |
| 0.0532 | 5.4368 | 20600 | 0.0736 | 17231888 |
| 0.0614 | 5.4896 | 20800 | 0.0735 | 17399184 |
| 0.0563 | 5.5424 | 21000 | 0.0745 | 17566160 |
| 0.0631 | 5.5952 | 21200 | 0.0736 | 17732304 |
| 0.0431 | 5.6480 | 21400 | 0.0733 | 17900880 |
| 0.0466 | 5.7008 | 21600 | 0.0733 | 18070192 |
| 0.0843 | 5.7536 | 21800 | 0.0732 | 18237168 |
| 0.0494 | 5.8064 | 22000 | 0.0731 | 18403856 |
| 0.1229 | 5.8592 | 22200 | 0.0732 | 18571248 |
| 0.0307 | 5.9120 | 22400 | 0.0731 | 18738672 |
| 0.0534 | 5.9648 | 22600 | 0.0730 | 18905744 |
| 0.0806 | 6.0174 | 22800 | 0.0731 | 19073440 |
| 0.0733 | 6.0702 | 23000 | 0.0732 | 19241920 |
| 0.1169 | 6.1230 | 23200 | 0.0732 | 19409408 |
| 0.0757 | 6.1758 | 23400 | 0.0731 | 19577024 |
| 0.0495 | 6.2286 | 23600 | 0.0728 | 19744608 |
| 0.0752 | 6.2814 | 23800 | 0.0727 | 19911488 |
| 0.0694 | 6.3342 | 24000 | 0.0726 | 20078944 |
| 0.0617 | 6.3870 | 24200 | 0.0727 | 20244928 |
| 0.093 | 6.4398 | 24400 | 0.0725 | 20411232 |
| 0.0579 | 6.4925 | 24600 | 0.0728 | 20578080 |
| 0.0712 | 6.5453 | 24800 | 0.0725 | 20746592 |
| 0.1026 | 6.5981 | 25000 | 0.0727 | 20913344 |
| 0.0384 | 6.6509 | 25200 | 0.0725 | 21081952 |
| 0.0928 | 6.7037 | 25400 | 0.0724 | 21248384 |
| 0.0907 | 6.7565 | 25600 | 0.0723 | 21415872 |
| 0.0511 | 6.8093 | 25800 | 0.0729 | 21584000 |
| 0.1154 | 6.8621 | 26000 | 0.0723 | 21751168 |
| 0.0398 | 6.9149 | 26200 | 0.0722 | 21918816 |
| 0.0674 | 6.9677 | 26400 | 0.0723 | 22084384 |
| 0.0688 | 7.0203 | 26600 | 0.0722 | 22251776 |
| 0.0766 | 7.0731 | 26800 | 0.0722 | 22418080 |
| 0.0622 | 7.1259 | 27000 | 0.0721 | 22587392 |
| 0.0562 | 7.1787 | 27200 | 0.0721 | 22753056 |
| 0.0631 | 7.2315 | 27400 | 0.0724 | 22920768 |
| 0.0828 | 7.2843 | 27600 | 0.0718 | 23087296 |
| 0.0412 | 7.3371 | 27800 | 0.0721 | 23254400 |
| 0.0324 | 7.3899 | 28000 | 0.0721 | 23422752 |
| 0.0441 | 7.4427 | 28200 | 0.0721 | 23588352 |
| 0.0616 | 7.4954 | 28400 | 0.0723 | 23755840 |
| 0.0565 | 7.5482 | 28600 | 0.0721 | 23923680 |
| 0.0559 | 7.6010 | 28800 | 0.0719 | 24091168 |
| 0.0394 | 7.6538 | 29000 | 0.0721 | 24258016 |
| 0.0899 | 7.7066 | 29200 | 0.0718 | 24427808 |
| 0.0231 | 7.7594 | 29400 | 0.0718 | 24596288 |
| 0.0492 | 7.8122 | 29600 | 0.0718 | 24764192 |
| 0.0627 | 7.8650 | 29800 | 0.0719 | 24932000 |
| 0.0346 | 7.9178 | 30000 | 0.0718 | 25100224 |
| 0.0597 | 7.9706 | 30200 | 0.0722 | 25267808 |
| 0.0569 | 8.0232 | 30400 | 0.0720 | 25433440 |
| 0.0757 | 8.0760 | 30600 | 0.0717 | 25600672 |
| 0.0524 | 8.1288 | 30800 | 0.0718 | 25769408 |
| 0.0424 | 8.1816 | 31000 | 0.0717 | 25936160 |
| 0.0652 | 8.2344 | 31200 | 0.0718 | 26103744 |
| 0.0822 | 8.2872 | 31400 | 0.0715 | 26270560 |
| 0.0691 | 8.3400 | 31600 | 0.0719 | 26437536 |
| 0.031 | 8.3928 | 31800 | 0.0719 | 26604480 |
| 0.0484 | 8.4456 | 32000 | 0.0716 | 26771680 |
| 0.1148 | 8.4984 | 32200 | 0.0716 | 26940256 |
| 0.073 | 8.5511 | 32400 | 0.0715 | 27107680 |
| 0.0813 | 8.6039 | 32600 | 0.0718 | 27274048 |
| 0.1232 | 8.6567 | 32800 | 0.0717 | 27440544 |
| 0.0994 | 8.7095 | 33000 | 0.0716 | 27608000 |
| 0.0363 | 8.7623 | 33200 | 0.0715 | 27776704 |
| 0.016 | 8.8151 | 33400 | 0.0717 | 27942752 |
| 0.0744 | 8.8679 | 33600 | 0.0716 | 28108864 |
| 0.0325 | 8.9207 | 33800 | 0.0714 | 28275296 |
| 0.0517 | 8.9735 | 34000 | 0.0716 | 28443520 |
| 0.028 | 9.0261 | 34200 | 0.0716 | 28609776 |
| 0.061 | 9.0789 | 34400 | 0.0716 | 28777712 |
| 0.1408 | 9.1317 | 34600 | 0.0717 | 28944144 |
| 0.0362 | 9.1845 | 34800 | 0.0716 | 29111152 |
| 0.0993 | 9.2373 | 35000 | 0.0716 | 29278000 |
| 0.0391 | 9.2901 | 35200 | 0.0716 | 29443792 |
| 0.0398 | 9.3429 | 35400 | 0.0716 | 29609072 |
| 0.0981 | 9.3957 | 35600 | 0.0715 | 29776592 |
| 0.0716 | 9.4485 | 35800 | 0.0716 | 29941616 |
| 0.066 | 9.5013 | 36000 | 0.0717 | 30110160 |
| 0.0694 | 9.5540 | 36200 | 0.0716 | 30277744 |
| 0.1284 | 9.6068 | 36400 | 0.0716 | 30447152 |
| 0.028 | 9.6596 | 36600 | 0.0713 | 30612976 |
| 0.0429 | 9.7124 | 36800 | 0.0714 | 30780240 |
| 0.0227 | 9.7652 | 37000 | 0.0715 | 30948048 |
| 0.05 | 9.8180 | 37200 | 0.0715 | 31116368 |
| 0.0342 | 9.8708 | 37400 | 0.0715 | 31283888 |
| 0.0368 | 9.9236 | 37600 | 0.0716 | 31452560 |
| 0.0681 | 9.9764 | 37800 | 0.0714 | 31620720 |
| 0.0867 | 10.0290 | 38000 | 0.0713 | 31786016 |
| 0.0869 | 10.0818 | 38200 | 0.0715 | 31952768 |
| 0.0735 | 10.1346 | 38400 | 0.0714 | 32120320 |
| 0.0173 | 10.1874 | 38600 | 0.0715 | 32287584 |
| 0.0469 | 10.2402 | 38800 | 0.0716 | 32455072 |
| 0.0459 | 10.2930 | 39000 | 0.0713 | 32621184 |
| 0.0397 | 10.3458 | 39200 | 0.0714 | 32788960 |
| 0.0401 | 10.3986 | 39400 | 0.0716 | 32955776 |
| 0.0332 | 10.4514 | 39600 | 0.0716 | 33122816 |
| 0.0907 | 10.5042 | 39800 | 0.0716 | 33291072 |
| 0.0616 | 10.5569 | 40000 | 0.0716 | 33458560 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf | RichardErkhov | "2025-04-19T08:41:02Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-19T05:27:18Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mp_mistral7bv3_sft_dpo_beta2e-1_epoch2 - GGUF
- Model creator: https://huggingface.co/yjwon/
- Original model: https://huggingface.co/yjwon/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q2_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q2_K.gguf) | Q2_K | 2.54GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q3_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q3_K.gguf) | Q3_K | 3.28GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_K.gguf) | Q4_K | 4.07GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_K.gguf) | Q5_K | 4.78GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q6_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q6_K.gguf) | Q6_K | 5.54GB |
| [mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q8_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf/blob/main/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q8_0.gguf) | Q8_0 | 7.17GB |
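
These GGUF files work with any llama.cpp-compatible runtime. The card itself does not prescribe a workflow, so the following is only a sketch, assuming `llama-cpp-python` and `huggingface_hub` are installed and that the Q4_K_M quant (a common size/quality trade-off from the table above) is used; the prompt and the context/offload settings are illustrative placeholders.

```python
# Sketch only: run one of the GGUF quants listed above with llama-cpp-python.
# Assumes: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M file from this repository (filename taken from the table above).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/yjwon_-_mp_mistral7bv3_sft_dpo_beta2e-1_epoch2-gguf",
    filename="mp_mistral7bv3_sft_dpo_beta2e-1_epoch2.Q4_K_M.gguf",
)

# Load the model; n_ctx and n_gpu_layers are example values, tune them for your hardware.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

# Generate a completion from a placeholder prompt.
out = llm("Explain direct preference optimization in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```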
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
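
The original card leaves this section unfilled; a minimal sketch, assuming the base repository `yjwon/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2` (linked above) loads as a standard `transformers` causal language model — this is not confirmed by the card.

```python
# Hedged sketch, not from the original card: load the unquantized checkpoint with transformers.
# Assumes: pip install transformers accelerate (accelerate is needed for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yjwon/mp_mistral7bv3_sft_dpo_beta2e-1_epoch2"  # repo id taken from the links above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Placeholder prompt; generation settings are illustrative.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```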
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|