| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string | string | timestamp[us, tz=UTC] | int64 | int64 | string | sequence | string | timestamp[us, tz=UTC] | string |
mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF | mradermacher | 2025-05-28T20:01:53Z | 88 | 1 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cot",
"deepseek",
"Llama 3.2",
"128k context",
"fine tune",
"llama-3",
"llama-3.2",
"en",
"base_model:DavidAU/Deep-Reasoning-Llama-3.2-Hermes-3-3B",
"base_model:quantized:DavidAU/Deep-Reasoning-Llama-3.2-Hermes-3-3B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-17T21:27:17Z | ---
base_model: DavidAU/Deep-Reasoning-Llama-3.2-Hermes-3-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cot
- deepseek
- Llama 3.2
- 128k context
- fine tune
- llama-3
- llama-3.2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Deep-Reasoning-Llama-3.2-Hermes-3-3B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
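For a concrete starting point, here is a minimal sketch (not part of the original card) that downloads one quant from the table below and runs it with the llama-cpp-python bindings; the repo and filename match the i1-Q4_K_M entry, everything else is illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch the i1-Q4_K_M quant listed in the table below
model_path = hf_hub_download(
    repo_id="mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF",
    filename="Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context size is adjustable
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain chain-of-thought prompting in one paragraph."}]
)
print(out["choices"][0]["message"]["content"])
```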
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Hermes-3-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Hermes-3-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
FormlessAI/c1e6c04f-5ee9-4d3a-a62e-65094a07bc4f | FormlessAI | 2025-05-28T20:01:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:finetune:Qwen/Qwen1.5-0.5B-Chat",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T17:28:03Z | ---
base_model: Qwen/Qwen1.5-0.5B-Chat
library_name: transformers
model_name: c1e6c04f-5ee9-4d3a-a62e-65094a07bc4f
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for c1e6c04f-5ee9-4d3a-a62e-65094a07bc4f
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Build a chat-style text-generation pipeline for this model
generator = pipeline("text-generation", model="FormlessAI/c1e6c04f-5ee9-4d3a-a62e-65094a07bc4f", device="cuda")

# Pass the question as a chat message and print only the newly generated text
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/8ax63y8y)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
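As a rough illustration of what a DPO run with TRL looks like (a hedged sketch, not the actual training script: the preference dataset and hyperparameters below are placeholders):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen1.5-0.5B-Chat"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# DPO expects preference pairs with "prompt", "chosen" and "rejected" fields;
# the dataset below is a stand-in, since the card does not name the real one.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="dpo-output", beta=0.1)  # beta scales the DPO loss
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```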
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF | mradermacher | 2025-05-28T20:01:13Z | 839 | 1 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cot",
"deepseek",
"Llama 3.2",
"128k context",
"llama-3",
"llama-3.2",
"fine tune",
"en",
"base_model:DavidAU/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B",
"base_model:quantized:DavidAU/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-18T01:35:04Z | ---
base_model: DavidAU/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cot
- deepseek
- Llama 3.2
- 128k context
- llama-3
- llama-3.2
- fine tune
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
obaysurebay/test | obaysurebay | 2025-05-28T20:00:49Z | 0 | 0 | null | [
"netflix_recommendation",
"region:us"
] | null | 2025-05-28T19:28:27Z | # Netflix Movie Recommendation System
## Model Description
This is a content-based recommendation system for Netflix movies and TV shows using TF-IDF vectorization and cosine similarity.
## How to Use
```python
from huggingface_hub import hf_hub_download
import pickle
import numpy as np
import pandas as pd

# Download the model files from the Hub
vectorizer_path = hf_hub_download(repo_id="your-username/netflix-recommender", filename="tfidf_vectorizer.pkl")
with open(vectorizer_path, "rb") as f:
    tfidf_vectorizer = pickle.load(f)

cosine_sim_matrix = np.load(hf_hub_download(repo_id="your-username/netflix-recommender", filename="cosine_similarity_matrix.npy"))
movie_data = pd.read_csv(hf_hub_download(repo_id="your-username/netflix-recommender", filename="final_movie_data.csv"))

# Use the recommendation system
# (Add your FlixHub class here or implement recommendation logic; see the sketch below)
```
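A possible shape for that recommendation logic, sketched under the assumption that `final_movie_data.csv` has `title` and `type` columns aligned with the rows of the similarity matrix (the example query title is hypothetical):
```python
def recommend(title, top_n=5):
    # Find the row index of the query title, then rank every other title
    # by its precomputed cosine similarity to that row.
    idx = movie_data.index[movie_data["title"] == title][0]
    scores = sorted(enumerate(cosine_sim_matrix[idx]), key=lambda x: x[1], reverse=True)
    top_indices = [i for i, _ in scores[1 : top_n + 1]]  # skip the title itself
    return movie_data.iloc[top_indices][["title", "type"]]

print(recommend("Breaking Bad"))
```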
## Model Files
- `tfidf_vectorizer.pkl`: Trained TF-IDF vectorizer
- `cosine_similarity_matrix.npy`: Precomputed similarity matrix
- `final_movie_data.csv`: Clean dataset with movie titles and types
- `config.json`: Model configuration
## Training Data
- Dataset: Netflix Movies and Shows Dataset
- Features: title, type, director, cast, rating, genre, description
- Preprocessing: Text cleaning, TF-IDF vectorization
- Similarity: Cosine similarity
## Performance
This model provides content-based recommendations based on movie/show features and descriptions. |
mradermacher/E-Star-small-v0.1-GGUF | mradermacher | 2025-05-28T20:00:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:datumo/E-Star-small-v0.1",
"base_model:quantized:datumo/E-Star-small-v0.1",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T18:57:04Z | ---
base_model: datumo/E-Star-small-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/datumo/E-Star-small-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/E-Star-small-v0.1-GGUF/resolve/main/E-Star-small-v0.1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
llmware/slim-sql-qwen-base-ov | llmware | 2025-05-28T19:59:17Z | 0 | 0 | null | [
"openvino",
"qwen2",
"green",
"p2",
"llmware-fx",
"ov",
"emerald",
"base_model:llmware/slim-sql-qwen-base",
"base_model:quantized:llmware/slim-sql-qwen-base",
"license:apache-2.0",
"region:us"
] | null | 2024-08-31T09:43:29Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-sql-qwen-base
base_model_relation: quantized
tags: [green, p2, llmware-fx, ov, emerald]
---
# slim-sql-qwen-base-ov
**slim-sql-qwen-base-ov** is a small specialized function calling model that takes a table schema and a natural language query as input, and outputs a corresponding SQL statement that can be run against a database table. This is a very small text-to-SQL model designed for reasonable accuracy on single tables and relatively straightforward queries, and for easy integration into multi-step processes.
This is an OpenVINO int4 quantized version of slim-sql-qwen-base, providing a very fast, very small inference implementation optimized for AI PCs using Intel GPU, CPU and NPU.
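As a rough sketch of one way to run such an OpenVINO checkpoint (illustrative only: `optimum-intel` is one of several OpenVINO runtimes, and the prompt format shown is an assumption, not the model's documented template):
```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "llmware/slim-sql-qwen-base-ov"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical schema and question; the real prompt template may differ.
prompt = ("CREATE TABLE customers (id INT, name TEXT, balance REAL)\n"
          "Which customers have a balance over 100?")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```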
### Model Description
- **Developed by:** llmware
- **Model type:** qwen2
- **Parameters:** 3 billion
- **Model Parent:** llmware/slim-sql-qwen-base
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Text-to-SQL conversion
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF | mradermacher | 2025-05-28T19:58:47Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"mixture of experts",
"moe",
"4x8B",
"float32",
"llama-3",
"llama3",
"32 bit enhanced",
"LLama MOE",
"merge",
"en",
"base_model:DavidAU/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32",
"base_model:quantized:DavidAU/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-18T13:49:41Z | ---
base_model: DavidAU/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- mixture of experts
- moe
- 4x8B
- float32
- llama-3
- llama3
- 32 bit enhanced
- LLama MOE
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q2_K.gguf) | Q2_K | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q3_K_S.gguf) | Q3_K_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q3_K_M.gguf) | Q3_K_M | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q3_K_L.gguf) | Q3_K_L | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.IQ4_XS.gguf) | IQ4_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q4_K_M.gguf) | Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q5_K_S.gguf) | Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q5_K_M.gguf) | Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q6_K.gguf) | Q6_K | 20.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF/resolve/main/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32.Q8_0.gguf) | Q8_0 | 26.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
duythanh1022/blip2-finetuned-vivqa | duythanh1022 | 2025-05-28T19:58:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:ybelkada/blip2-opt-2.7b-fp16-sharded",
"base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded",
"region:us"
] | null | 2025-05-28T14:41:46Z | ---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
tags:
- generated_from_trainer
model-index:
- name: blip2-finetuned-vivqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blip2-finetuned-vivqa
This model is a fine-tuned version of [ybelkada/blip2-opt-2.7b-fp16-sharded](https://huggingface.co/ybelkada/blip2-opt-2.7b-fp16-sharded) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4914
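Since the card does not include a usage snippet, here is a hedged sketch of loading the adapter on top of the base model with PEFT (the image path, question format and generation settings are assumptions):
```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import Blip2ForConditionalGeneration, Blip2Processor

base_id = "ybelkada/blip2-opt-2.7b-fp16-sharded"
processor = Blip2Processor.from_pretrained(base_id)
base_model = Blip2ForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the fine-tuned adapter from this repo
model = PeftModel.from_pretrained(base_model, "duythanh1022/blip2-finetuned-vivqa")

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, text="Question: What is in the picture? Answer:", return_tensors="pt").to(base_model.device, torch.float16)
out = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```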
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6558 | 0.4551 | 1000 | 1.5727 |
| 1.5664 | 0.9101 | 2000 | 1.4914 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF | mradermacher | 2025-05-28T19:57:44Z | 43 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cot",
"deepseek",
"Llama 3.2",
"128k context",
"fine tune",
"llama-3",
"llama-3.2",
"en",
"base_model:DavidAU/Deep-Reasoning-Llama-3.2-Enigma-3B",
"base_model:quantized:DavidAU/Deep-Reasoning-Llama-3.2-Enigma-3B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-19T02:18:07Z | ---
base_model: DavidAU/Deep-Reasoning-Llama-3.2-Enigma-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cot
- deepseek
- Llama 3.2
- 128k context
- fine tune
- llama-3
- llama-3.2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Deep-Reasoning-Llama-3.2-Enigma-3B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Enigma-3B-i1-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Enigma-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/openaccess-ai-collective_-_dpopenhermes-alpha-v0-8bits | RichardErkhov | 2025-05-28T19:52:28Z | 0 | 0 | null | [
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-28T19:48:30Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
dpopenhermes-alpha-v0 - bnb 8bits
- Model creator: https://huggingface.co/openaccess-ai-collective/
- Original model: https://huggingface.co/openaccess-ai-collective/dpopenhermes-alpha-v0/
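Not part of the original card, but as a minimal loading sketch: the weights are serialized in bitsandbytes 8-bit format, so they can be loaded directly with transformers (a CUDA GPU with `bitsandbytes` installed is assumed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/openaccess-ai-collective_-_dpopenhermes-alpha-v0-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The checkpoint is already quantized, so no extra quantization config is needed.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```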
Original model description:
---
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- generated_from_trainer
model-index:
- name: qlora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# DPOpenHermes
## OpenHermes x NoRobots x Neural
This model is a RL fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the Intel/orca_dpo_pairs and winglian/no_robots_rlhf datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 1348
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
RichardErkhov/MaziyarPanahi_-_M7Yamshadowexperiment28_YamshadowExperiment27pastiche-4bits | RichardErkhov | 2025-05-28T19:51:21Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-28T19:47:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
M7Yamshadowexperiment28_YamshadowExperiment27pastiche - bnb 4bits
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/M7Yamshadowexperiment28_YamshadowExperiment27pastiche/
Original model description:
---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: M7Yamshadowexperiment28_YamshadowExperiment27pastiche
base_model:
- automerger/M7Yamshadowexperiment28-7B
- automerger/YamshadowExperiment27pastiche-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# M7Yamshadowexperiment28_YamshadowExperiment27pastiche
M7Yamshadowexperiment28_YamshadowExperiment27pastiche is a merge of the following models:
* [automerger/M7Yamshadowexperiment28-7B](https://huggingface.co/automerger/M7Yamshadowexperiment28-7B)
* [automerger/YamshadowExperiment27pastiche-7B](https://huggingface.co/automerger/YamshadowExperiment27pastiche-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "MaziyarPanahi/M7Yamshadowexperiment28_YamshadowExperiment27pastiche"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline and sample a response
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/CorticalStack_-_mistral-7b-openhermes-2.5-sft-4bits | RichardErkhov | 2025-05-28T19:50:52Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-28T19:48:27Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-7b-openhermes-2.5-sft - bnb 4bits
- Model creator: https://huggingface.co/CorticalStack/
- Original model: https://huggingface.co/CorticalStack/mistral-7b-openhermes-2.5-sft/
Original model description:
---
license: apache-2.0
---
# mistral-7b-openhermes-2.5-sft
mistral-7b-openhermes-2.5-sft is an SFT fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) using the [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) dataset.
## Fine-tuning configuration
### LoRA
- r: 256
- LoRA alpha: 128
- LoRA dropout: 0.0
### Training arguments
- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: adamw_torch_fused
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.1
- Learning rate scheduler type: linear
- Max seq length: 2048
- 4-bit bnb: True
Trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
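Expressed as a PEFT configuration, the LoRA settings above look roughly like this (a sketch: the target modules are an assumption, since the card does not list them):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=256,            # LoRA rank, as listed above
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    # The card does not state target modules; the attention projections are
    # a common choice for Mistral-style models.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```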
|
mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF | mradermacher | 2025-05-28T19:49:32Z | 76 | 0 | transformers | [
"transformers",
"gguf",
"Llama 3.2",
"8 X 3B",
"128k context",
"moe",
"8 experts",
"reasoning",
"thinking",
"r1",
"cot",
"deepseek",
"mixture of experts",
"mergekit",
"merge",
"llama-3",
"llama-3.2",
"en",
"base_model:DavidAU/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B",
"base_model:quantized:DavidAU/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-21T17:25:25Z | ---
base_model: DavidAU/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- Llama 3.2
- 8 X 3B
- 128k context
- moe
- 8 experts
- reasoning
- thinking
- r1
- cot
- deepseek
- mixture of experts
- mergekit
- merge
- llama-3
- llama-3.2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ2_M.gguf) | i1-IQ2_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 6.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q2_K.gguf) | i1-Q2_K | 7.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ3_S.gguf) | i1-IQ3_S | 8.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ3_M.gguf) | i1-IQ3_M | 8.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 9.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 10.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q4_0.gguf) | i1-Q4_0 | 10.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 10.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 11.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q4_1.gguf) | i1-Q4_1 | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-i1-GGUF/resolve/main/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B.i1-Q6_K.gguf) | i1-Q6_K | 15.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
akshugboi/AKSTEVE | akshugboi | 2025-05-28T19:48:45Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T19:21:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AKS
---
# Aksteve
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AKS` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "AKS",
    "lora_weights": "https://huggingface.co/akshugboi/AKSTEVE/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# Save each generated image to disk
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
import torch
from diffusers import AutoPipelineForText2Image

# Load the base FLUX.1-dev pipeline, then attach the LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('akshugboi/AKSTEVE', weight_name='lora.safetensors')

# Generate an image with the trigger word
image = pipeline('AKS').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/akshugboi/AKSTEVE/discussions) to add images that show off what you’ve made with this LoRA.
|
RichardErkhov/AINovice2005_-_ElEmperador-4bits | RichardErkhov | 2025-05-28T19:48:20Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-28T19:44:55Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ElEmperador - bnb 4bits
- Model creator: https://huggingface.co/AINovice2005/
- Original model: https://huggingface.co/AINovice2005/ElEmperador/
Original model description:
---
license: apache-2.0
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
library_name: transformers
tags:
- transformers
- ORPO
- RLHF
- notus
- argilla
---
# Model Overview
# Model Name: ElEmperador

## Model Description:
ElEmperador is an ORPO-based finetune derived from the Mistral-7B-v0.1 base model.
## Evals:
BLEU: 0.209
## Inference Script:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM


def generate_response(model_name, input_text, max_new_tokens=50):
    # Load the tokenizer and model from Hugging Face Hub
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Tokenize the input text
    input_ids = tokenizer(input_text, return_tensors='pt').input_ids

    # Generate a response using the model
    with torch.no_grad():
        generated_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)

    # Decode the generated tokens into text
    generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
    return generated_text


if __name__ == "__main__":
    # Set the model name from Hugging Face Hub
    model_name = "AINovice2005/ElEmperador"
    input_text = "Hello, how are you?"

    # Generate and print the model's response
    output = generate_response(model_name, input_text)
    print(f"Input: {input_text}")
    print(f"Output: {output}")
```
## Results
Firstly, ORPO is a viable RLHF algorithm for improving model performance alongside SFT fine-tuning. Secondly, it helps align the model's outputs more closely with human preferences, leading to more user-friendly and acceptable results.
|
celinelee/r1qw7B-sve-distill-r1-gemini-ratsuccess | celinelee | 2025-05-28T19:47:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-27T16:14:02Z | ---
library_name: transformers
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
tags:
- llama-factory
- generated_from_trainer
model-index:
- name: r1qw7B-sve-distill-r1-gemini-ratsuccess
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# r1qw7B-sve-distill-r1-gemini-ratsuccess
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- total_eval_batch_size: 48
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
RichardErkhov/MaziyarPanahi_-_M7Yamshadowexperiment28_Strangemerges_32Yamshadow-4bits | RichardErkhov | 2025-05-28T19:46:10Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-28T19:43:52Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
M7Yamshadowexperiment28_Strangemerges_32Yamshadow - bnb 4bits
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/M7Yamshadowexperiment28_Strangemerges_32Yamshadow/
Original model description:
---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: M7Yamshadowexperiment28_Strangemerges_32Yamshadow
base_model:
- automerger/M7Yamshadowexperiment28-7B
- automerger/Strangemerges_32Yamshadow-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# M7Yamshadowexperiment28_Strangemerges_32Yamshadow
M7Yamshadowexperiment28_Strangemerges_32Yamshadow is a merge of the following models:
* [automerger/M7Yamshadowexperiment28-7B](https://huggingface.co/automerger/M7Yamshadowexperiment28-7B)
* [automerger/Strangemerges_32Yamshadow-7B](https://huggingface.co/automerger/Strangemerges_32Yamshadow-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/M7Yamshadowexperiment28_Strangemerges_32Yamshadow"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
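Since this repository ships bnb 4-bit weights, you may also be able to load them directly; this is a hedged sketch that assumes the bitsandbytes quantization config stored in the checkpoint is picked up automatically:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/MaziyarPanahi_-_M7Yamshadowexperiment28_Strangemerges_32Yamshadow-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The 4-bit config should be read from the checkpoint (assumption), so no explicit BitsAndBytesConfig
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```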
|
Critical-Future/ace-nemo | Critical-Future | 2025-05-28T19:43:39Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:1910.09700",
"region:us"
] | null | 2025-05-28T19:22:47Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Critical-Future/Qwen3-8B-wip | Critical-Future | 2025-05-28T19:43:29Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"arxiv:1910.09700",
"region:us"
] | null | 2025-05-28T19:22:12Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
halchou/FullFeatures-BFConfig-LoRA-open_llama_3b-v01 | halchou | 2025-05-28T19:42:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T19:42:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Critical-Future/Deepseek-R1.5 | Critical-Future | 2025-05-28T19:42:06Z | 0 | 0 | null | [
"safetensors",
"deepseek_v3",
"custom_code",
"arxiv:1910.09700",
"fp8",
"region:us"
] | null | 2025-05-28T19:21:29Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Critical-Future/smol-vlm | Critical-Future | 2025-05-28T19:40:57Z | 0 | 0 | null | [
"onnx",
"safetensors",
"idefics3",
"arxiv:1910.09700",
"region:us"
] | null | 2025-05-28T19:26:59Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abdulsamad99/medical-fine-tuning | abdulsamad99 | 2025-05-28T19:40:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-05-28T19:40:39Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
omarelsayeed/QA_Search | omarelsayeed | 2025-05-28T19:40:28Z | 45 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-12-19T13:00:27Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 526 with parameters:
```
{'batch_size': 256, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.LoggingCosineSimLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5e-06
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 200,
"weight_decay": 0.01
}
```
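For illustration, here is a hedged reconstruction of this setup using the stock `CosineSimilarityLoss` (the card's `LoggingCosineSimLoss` is a custom subclass not shipped with the library, and the training pairs below are hypothetical):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical similarity-labelled pairs; the actual training data is not published
train_examples = [
    InputExample(texts=["How do I reset my password?", "Password reset steps"], label=0.9),
    InputExample(texts=["How do I reset my password?", "Store opening hours"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=256)

model = SentenceTransformer('{MODEL_NAME}')  # placeholder, as elsewhere in this card
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters listed above
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=200,
    optimizer_params={"lr": 5e-6},
    weight_decay=0.01,
)
```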
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 150, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
CeciGonSer/translation_pu_es_sintetico4 | CeciGonSer | 2025-05-28T19:39:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-28T19:34:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xw17/Qwen2.5-1.5B-Instruct_finetuned_1_optimized1_oversampling_FT | xw17 | 2025-05-28T19:37:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T19:35:52Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Makrrr/QTable-Taxi-V3 | Makrrr | 2025-05-28T19:36:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-28T19:36:48Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: QTable-Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper used in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="Makrrr/QTable-Taxi-V3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
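A hedged evaluation sketch follows; it assumes the pickled dict uses the Deep RL course layout with a `qtable` entry (an assumption — check the file's keys):
```python
import numpy as np
import gymnasium as gym

env = gym.make(model["env_id"])
state, _ = env.reset(seed=42)
total_reward, done = 0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```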
|
jp003/whisper-large-v3-lora-cv-pt-test | jp003 | 2025-05-28T19:36:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T16:30:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FashionFlora/Hujbert | FashionFlora | 2025-05-28T19:35:56Z | 0 | 0 | null | [
"text-to-speech",
"pl",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2025-04-13T08:08:17Z | ---
license: apache-2.0
language:
- pl
pipeline_tag: text-to-speech
---
This is a phonetic ALBERT, a.k.a. Hujbert, trained on phonetic words; it can be used as an encoder for a TTS.
-------------
Vocab loss: ±0.6
Token loss: ±1.5
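If the repository exposes standard 🤗 weights (an assumption — the card does not say), extracting encoder features for a TTS front-end could look like this sketch:
```python
from transformers import AutoModel, AutoTokenizer

# Assumes the repo hosts ALBERT weights loadable via AutoModel; adjust if not
tokenizer = AutoTokenizer.from_pretrained("FashionFlora/Hujbert")
model = AutoModel.from_pretrained("FashionFlora/Hujbert")

inputs = tokenizer("dzień dobry", return_tensors="pt")  # hypothetical phonetic input
hidden_states = model(**inputs).last_hidden_state  # per-token features for the TTS
```
|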
unsloth/Qwen2.5-Omni-3B | unsloth | 2025-05-28T19:33:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_omni",
"multimodal",
"unsloth",
"any-to-any",
"en",
"arxiv:2503.20215",
"base_model:Qwen/Qwen2.5-Omni-3B",
"base_model:finetune:Qwen/Qwen2.5-Omni-3B",
"license:other",
"endpoints_compatible",
"region:us"
] | any-to-any | 2025-05-28T19:32:15Z | ---
base_model:
- Qwen/Qwen2.5-Omni-3B
license: other
license_name: qwen-research
license_link: LICENSE
language:
- en
tags:
- multimodal
- unsloth
library_name: transformers
pipeline_tag: any-to-any
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
</div>
# Qwen2.5-Omni
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Overview
### Introduction
Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner.
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" width="80%"/>
</p>
### Key Features
* **Omni and Novel Architecture**: We propose the Thinker-Talker architecture, an end-to-end multimodal design that perceives diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We also propose a novel position embedding, TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio.
* **Real-Time Voice and Video Chat**: Architecture designed for fully real-time interactions, supporting chunked input and immediate output.
* **Natural and Robust Speech Generation**: Surpassing many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation.
* **Strong Performance Across Modalities**: Exhibiting exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves comparable performance to Qwen2.5-VL-7B.
* **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, evidenced by benchmarks such as MMLU and GSM8K.
### Model Architecture
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/overview.png" width="80%"/>
</p>
### Performance
We conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared to similarly sized single-modality models such as Qwen2.5-VL-7B and Qwen2-Audio, as well as closed-source models like Gemini-1.5-Pro. In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. Furthermore, in single-modality tasks, it excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness).
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/bar.png" width="80%"/>
</p>
<details>
<summary>Multimodality -> Text</summary>
<table class="tg"><thead>
<tr>
<th class="tg-0lax">Datasets</th>
<th class="tg-0lax">Model</th>
<th class="tg-0lax">Performance</th>
</tr></thead>
<tbody>
<tr>
<td class="tg-0lax" rowspan="10">OmniBench<br>Speech | Sound Event | Music | Avg</td>
<td class="tg-0lax">Gemini-1.5-Pro</td>
<td class="tg-0lax">42.67%|42.26%|46.23%|42.91%</td>
</tr>
<tr>
<td class="tg-0lax">MIO-Instruct</td>
<td class="tg-0lax">36.96%|33.58%|11.32%|33.80%</td>
</tr>
<tr>
<td class="tg-0lax">AnyGPT (7B)</td>
<td class="tg-0lax">17.77%|20.75%|13.21%|18.04%</td>
</tr>
<tr>
<td class="tg-0lax">video-SALMONN</td>
<td class="tg-0lax">34.11%|31.70%|<strong>56.60%</strong>|35.64%</td>
</tr>
<tr>
<td class="tg-0lax">UnifiedIO2-xlarge</td>
<td class="tg-0lax">39.56%|36.98%|29.25%|38.00%</td>
</tr>
<tr>
<td class="tg-0lax">UnifiedIO2-xxlarge</td>
<td class="tg-0lax">34.24%|36.98%|24.53%|33.98%</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">-|-|-|40.50%</td>
</tr>
<tr>
<td class="tg-0lax">Baichuan-Omni-1.5</td>
<td class="tg-0lax">-|-|-|42.90%</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">52.14%|52.08%|52.83%|52.19%</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>55.25%</strong>|<strong>60.00%</strong>|52.83%|<strong>56.13%</strong></td>
</tr>
</tbody></table>
</details>
<details>
<summary>Audio -> Text</summary>
<table class="tg"><thead>
<tr>
<th class="tg-0lax">Datasets</th>
<th class="tg-0lax">Model</th>
<th class="tg-0lax">Performance</th>
</tr></thead>
<tbody>
<tr>
<td class="tg-9j4x" colspan="3">ASR</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="12">Librispeech<br>dev-clean | dev other | test-clean | test-other</td>
<td class="tg-0lax">SALMONN</td>
<td class="tg-0lax">-|-|2.1|4.9</td>
</tr>
<tr>
<td class="tg-0lax">SpeechVerse</td>
<td class="tg-0lax">-|-|2.1|4.4</td>
</tr>
<tr>
<td class="tg-0lax">Whisper-large-v3</td>
<td class="tg-0lax">-|-|1.8|3.6</td>
</tr>
<tr>
<td class="tg-0lax">Llama-3-8B</td>
<td class="tg-0lax">-|-|-|3.4</td>
</tr>
<tr>
<td class="tg-0lax">Llama-3-70B</td>
<td class="tg-0lax">-|-|-|3.1</td>
</tr>
<tr>
<td class="tg-0lax">Seed-ASR-Multilingual</td>
<td class="tg-0lax">-|-|<strong>1.6</strong>|<strong>2.8</strong></td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">-|-|1.7|-</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">-|-|1.7|3.9</td>
</tr>
<tr>
<td class="tg-0lax">Qwen-Audio</td>
<td class="tg-0lax">1.8|4.0|2.0|4.2</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax"><strong>1.3</strong>|<strong>3.4</strong>|<strong>1.6</strong>|3.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">2.0|4.1|2.2|4.5</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">1.6|3.5|1.8|3.4</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="5">Common Voice 15<br>en | zh | yue | fr</td>
<td class="tg-0lax">Whisper-large-v3</td>
<td class="tg-0lax">9.3|12.8|10.9|10.8</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">7.9|6.3|6.4|8.5</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">8.6|6.9|<strong>5.9</strong>|9.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">9.1|6.0|11.6|9.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>7.6</strong>|<strong>5.2</strong>|7.3|<strong>7.5</strong></td>
</tr>
<tr>
<td class="tg-0lax" rowspan="8">Fleurs<br>zh | en</td>
<td class="tg-0lax">Whisper-large-v3</td>
<td class="tg-0lax">7.7|4.1</td>
</tr>
<tr>
<td class="tg-0lax">Seed-ASR-Multilingual</td>
<td class="tg-0lax">-|<strong>3.4</strong></td>
</tr>
<tr>
<td class="tg-0lax">Megrez-3B-Omni</td>
<td class="tg-0lax">10.8|-</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">4.4|-</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">3.0|3.8</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">7.5|-</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">3.2|5.4</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>3.0</strong>|4.1</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="6">Wenetspeech<br>test-net | test-meeting</td>
<td class="tg-0lax">Seed-ASR-Chinese</td>
<td class="tg-0lax"><strong>4.7|5.7</strong></td>
</tr>
<tr>
<td class="tg-0lax">Megrez-3B-Omni</td>
<td class="tg-0lax">-|16.4</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">6.9|-</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">6.8|7.4</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">6.3|8.1</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">5.9|7.7</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="4">Voxpopuli-V1.0-en</td>
<td class="tg-0lax">Llama-3-8B</td>
<td class="tg-0lax">6.2</td>
</tr>
<tr>
<td class="tg-0lax">Llama-3-70B</td>
<td class="tg-0lax"><strong>5.7</strong></td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">6.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">5.8</td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">S2TT</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="9">CoVoST2<br>en-de | de-en | en-zh | zh-en</td>
<td class="tg-0lax">SALMONN</td>
<td class="tg-0lax">18.6|-|33.1|-</td>
</tr>
<tr>
<td class="tg-0lax">SpeechLLaMA</td>
<td class="tg-0lax">-|27.1|-|12.3</td>
</tr>
<tr>
<td class="tg-0lax">BLSP</td>
<td class="tg-0lax">14.1|-|-|-</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">-|-|<strong>48.2</strong>|27.2</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">-|<strong>39.9</strong>|46.7|26.0</td>
</tr>
<tr>
<td class="tg-0lax">Qwen-Audio</td>
<td class="tg-0lax">25.1|33.9|41.5|15.7</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">29.9|35.2|45.2|24.4</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">28.3|38.1|41.4|26.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>30.2</strong>|37.7|41.4|<strong>29.4</strong></td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">SER</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="6">Meld</td>
<td class="tg-0lax">WavLM-large</td>
<td class="tg-0lax">0.542</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">0.524</td>
</tr>
<tr>
<td class="tg-0lax">Qwen-Audio</td>
<td class="tg-0lax">0.557</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">0.553</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">0.558</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>0.570</strong></td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">VSC</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="6">VocalSound</td>
<td class="tg-0lax">CLAP</td>
<td class="tg-0lax">0.495</td>
</tr>
<tr>
<td class="tg-0lax">Pengi</td>
<td class="tg-0lax">0.604</td>
</tr>
<tr>
<td class="tg-0lax">Qwen-Audio</td>
<td class="tg-0lax">0.929</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax"><strong>0.939</strong></td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">0.936</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>0.939</strong></td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">Music</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="3">GiantSteps Tempo</td>
<td class="tg-0lax">Llark-7B</td>
<td class="tg-0lax">0.86</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax"><strong>0.88</strong></td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>0.88</strong></td>
</tr>
<tr>
<td class="tg-0lax" rowspan="3">MusicCaps</td>
<td class="tg-0lax">LP-MusicCaps</td>
<td class="tg-0lax">0.291|0.149|0.089|<strong>0.061</strong>|0.129|0.130</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">0.325|<strong>0.163</strong>|<strong>0.093</strong>|0.057|<strong>0.132</strong>|<strong>0.229</strong></td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>0.328</strong>|0.162|0.090|0.055|0.127|0.225</td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">Audio Reasoning</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="4">MMAU<br>Sound | Music | Speech | Avg</td>
<td class="tg-0lax">Gemini-Pro-V1.5</td>
<td class="tg-0lax">56.75|49.40|58.55|54.90</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">54.95|50.98|42.04|49.20</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax"><strong>70.27</strong>|60.48|59.16|63.30</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">67.87|<strong>69.16|59.76|65.60</strong></td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">Voice Chatting</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="9">VoiceBench<br>AlpacaEval | CommonEval | SD-QA | MMSU</td>
<td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td>
<td class="tg-0lax"><strong>4.55</strong>|3.90|53.35|47.17</td>
</tr>
<tr>
<td class="tg-0lax">MERaLiON</td>
<td class="tg-0lax">4.50|3.77|55.06|34.95</td>
</tr>
<tr>
<td class="tg-0lax">Megrez-3B-Omni</td>
<td class="tg-0lax">3.50|2.95|25.95|27.03</td>
</tr>
<tr>
<td class="tg-0lax">Lyra-Base</td>
<td class="tg-0lax">3.85|3.50|38.25|49.74</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">4.42|<strong>4.15</strong>|50.72|54.78</td>
</tr>
<tr>
<td class="tg-0lax">Baichuan-Omni-1.5</td>
<td class="tg-0lax">4.50|4.05|43.40|57.25</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">3.74|3.43|35.71|35.72</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">4.32|4.00|49.37|50.23</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">4.49|3.93|<strong>55.71</strong>|<strong>61.32</strong></td>
</tr>
<tr>
<td class="tg-0lax" rowspan="9">VoiceBench<br>OpenBookQA | IFEval | AdvBench | Avg</td>
<td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td>
<td class="tg-0lax">65.27|<strong>66.88</strong>|98.46|71.45</td>
</tr>
<tr>
<td class="tg-0lax">MERaLiON</td>
<td class="tg-0lax">27.23|62.93|94.81|62.91</td>
</tr>
<tr>
<td class="tg-0lax">Megrez-3B-Omni</td>
<td class="tg-0lax">28.35|25.71|87.69|46.25</td>
</tr>
<tr>
<td class="tg-0lax">Lyra-Base</td>
<td class="tg-0lax">72.75|36.28|59.62|57.66</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">78.02|49.25|97.69|71.69</td>
</tr>
<tr>
<td class="tg-0lax">Baichuan-Omni-1.5</td>
<td class="tg-0lax">74.51|54.54|97.31|71.14</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">49.45|26.33|96.73|55.35</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">74.73|42.10|98.85|68.81</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>81.10</strong>|52.87|<strong>99.42</strong>|<strong>74.12</strong></td>
</tr>
</tbody></table>
</details>
<details>
<summary>Image -> Text</summary>
| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |
|--------------------------------|--------------|------------|------------|---------------|-------------|
| MMMU<sub>val</sub> | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** |
| MMMU-Pro<sub>overall</sub> | 36.6 | 29.7 | - | **38.3** | 37.6 |
| MathVista<sub>testmini</sub> | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 |
| MathVision<sub>full</sub> | 25.0 | 20.8 | 23.1 | **25.1** | - |
| MMBench-V1.1-EN<sub>test</sub> | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 |
| MMVet<sub>turbo</sub> | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 |
| MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 |
| MME<sub>sum</sub> | 2340 | 2117 | **2372** | 2347 | 2003 |
| MuirBench | 59.2 | 48.0 | - | **59.2** | - |
| CRPE<sub>relation</sub> | **76.5** | 73.7 | - | 76.4 | - |
| RealWorldQA<sub>avg</sub> | 70.3 | 62.6 | **71.9** | 68.5 | - |
| MME-RealWorld<sub>en</sub> | **61.6** | 55.6 | - | 57.4 | - |
| MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - |
| AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - |
| TextVQA<sub>val</sub> | 84.4 | 79.8 | 83.2 | **84.9** | - |
| DocVQA<sub>test</sub> | 95.2 | 93.3 | 93.5 | **95.7** | - |
| ChartQA<sub>test Avg</sub> | 85.3 | 82.8 | 84.9 | **87.3** | - |
| OCRBench_V2<sub>en</sub> | **57.8** | 51.7 | - | 56.3 | - |
| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro |
|--------------------------|--------------|---------------|---------------|----------------|----------------|
| Refcoco<sub>val</sub> | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 |
| Refcoco<sub>textA</sub> | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 |
| Refcoco<sub>textB</sub> | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 |
| Refcoco+<sub>val</sub> | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 |
| Refcoco+<sub>textA</sub> | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 |
| Refcoco+<sub>textB</sub> | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 |
| Refcocog+<sub>val</sub> | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 |
| Refcocog+<sub>test</sub> | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 |
| ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 |
| PointGrounding | 66.5 | 46.2 | **67.3** | - | - |
</details>
<details>
<summary>Video(without audio) -> Text</summary>
| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |
|-----------------------------|--------------|------------|------------|---------------|-------------|
| Video-MME<sub>w/o sub</sub> | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 |
| Video-MME<sub>w sub</sub> | **72.4** | 68.6 | 67.9 | 71.6 | - |
| MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - |
| EgoSchema<sub>test</sub> | **68.6** | 61.4 | 63.2 | 65.0 | - |
</details>
<details>
<summary>Zero-shot Speech Generation</summary>
<table class="tg"><thead>
<tr>
<th class="tg-0lax">Datasets</th>
<th class="tg-0lax">Model</th>
<th class="tg-0lax">Performance</th>
</tr></thead>
<tbody>
<tr>
<td class="tg-9j4x" colspan="3">Content Consistency</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="11">SEED<br>test-zh | test-en | test-hard </td>
<td class="tg-0lax">Seed-TTS_ICL</td>
<td class="tg-0lax">1.11 | 2.24 | 7.58</td>
</tr>
<tr>
<td class="tg-0lax">Seed-TTS_RL</td>
<td class="tg-0lax"><strong>1.00</strong> | 1.94 | <strong>6.42</strong></td>
</tr>
<tr>
<td class="tg-0lax">MaskGCT</td>
<td class="tg-0lax">2.27 | 2.62 | 10.27</td>
</tr>
<tr>
<td class="tg-0lax">E2_TTS</td>
<td class="tg-0lax">1.97 | 2.19 | -</td>
</tr>
<tr>
<td class="tg-0lax">F5-TTS</td>
<td class="tg-0lax">1.56 | <strong>1.83</strong> | 8.67</td>
</tr>
<tr>
<td class="tg-0lax">CosyVoice 2</td>
<td class="tg-0lax">1.45 | 2.57 | 6.83</td>
</tr>
<tr>
<td class="tg-0lax">CosyVoice 2-S</td>
<td class="tg-0lax">1.45 | 2.38 | 8.08</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td>
<td class="tg-0lax">1.95 | 2.87 | 9.92</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B_RL</td>
<td class="tg-0lax">1.58 | 2.51 | 7.86</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td>
<td class="tg-0lax">1.70 | 2.72 | 7.97</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B_RL</td>
<td class="tg-0lax">1.42 | 2.32 | 6.54</td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">Speaker Similarity</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="11">SEED<br>test-zh | test-en | test-hard </td>
<td class="tg-0lax">Seed-TTS_ICL</td>
<td class="tg-0lax">0.796 | 0.762 | 0.776</td>
</tr>
<tr>
<td class="tg-0lax">Seed-TTS_RL</td>
<td class="tg-0lax"><strong>0.801</strong> | <strong>0.766</strong> | <strong>0.782</strong></td>
</tr>
<tr>
<td class="tg-0lax">MaskGCT</td>
<td class="tg-0lax">0.774 | 0.714 | 0.748</td>
</tr>
<tr>
<td class="tg-0lax">E2_TTS</td>
<td class="tg-0lax">0.730 | 0.710 | -</td>
</tr>
<tr>
<td class="tg-0lax">F5-TTS</td>
<td class="tg-0lax">0.741 | 0.647 | 0.713</td>
</tr>
<tr>
<td class="tg-0lax">CosyVoice 2</td>
<td class="tg-0lax">0.748 | 0.652 | 0.724</td>
</tr>
<tr>
<td class="tg-0lax">CosyVoice 2-S</td>
<td class="tg-0lax">0.753 | 0.654 | 0.732</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td>
<td class="tg-0lax">0.741 | 0.635 | 0.748</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B_RL</td>
<td class="tg-0lax">0.744 | 0.635 | 0.746</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td>
<td class="tg-0lax">0.752 | 0.632 | 0.747</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B_RL</td>
<td class="tg-0lax">0.754 | 0.641 | 0.752</td>
</tr>
</tbody></table>
</details>
<details>
<summary>Text -> Text</summary>
| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B |
|-----------------------------------|-----------|------------|------------|------------|------------|-------------|-----------|
| MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 |
| MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 |
| LiveBench<sub>0831</sub> | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 |
| GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 |
| MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 |
| GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 |
| HumanEval | 78.7 | 70.7 | **84.8** | 74.4 | 79.9 | 72.6 | 68.9 |
| MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 |
| MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 |
| LiveCodeBench<sub>2305-2409</sub> | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 |
</details>
## Quickstart
Below, we provide simple examples showing how to use Qwen2.5-Omni with 🤗 Transformers. The code for Qwen2.5-Omni is included in the latest Hugging Face `transformers`, and we advise you to build from source with the following commands:
```
pip uninstall transformers
pip install git+https://github.com/huggingface/[email protected]
pip install accelerate
```
Otherwise, you might encounter the following error:
```
KeyError: 'qwen2_5_omni'
```
We offer a toolkit to help you handle various types of audio and visual input more conveniently, as if you were using an API. This includes base64 data, URLs, and interleaved audio, images, and videos. You can install it with the following command; make sure your system has `ffmpeg` installed:
```bash
# It's highly recommended to use `[decord]` feature for faster video loading.
pip install qwen-omni-utils[decord] -U
```
If you are not using Linux, you may not be able to install `decord` from PyPI. In that case, use `pip install qwen-omni-utils -U`, which falls back to torchvision for video processing. You can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) so that decord is used when loading videos.
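As a rough sketch of the input forms the toolkit accepts (the file paths, URL, and base64 payload below are placeholders, not real media), a single conversation can mix them freely:

```python
from qwen_omni_utils import process_mm_info

# Hypothetical conversation mixing a local file, a URL, and a base64 image.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": "/path/to/question.wav"},         # local file
            {"type": "video", "video": "https://example.com/clip.mp4"},  # URL
            {"type": "image", "image": "data:image;base64,/9j/..."},     # base64 payload (placeholder)
            {"type": "text", "text": "Describe what you see and hear."},
        ],
    }
]

# The toolkit normalizes all of these into arrays that the processor accepts.
audios, images, videos = process_mm_info(conversation, use_audio_in_video=True)
```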
### 🤗 Transformers Usage
Here is a code snippet showing how to use the chat model with `transformers` and `qwen_omni_utils`:
```python
import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info
# default: Load the model on the available device(s)
model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-3B", torch_dtype="auto", device_map="auto")
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2.5-Omni-3B",
# torch_dtype="auto",
# device_map="auto",
# attn_implementation="flash_attention_2",
# )
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-3B")
conversation = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"},
],
},
]
# set use audio in video
USE_AUDIO_IN_VIDEO = True
# Preparation for inference
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)
# Inference: Generation of the output text and audio
text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
sf.write(
"output.wav",
audio.reshape(-1).detach().cpu().numpy(),
samplerate=24000,
)
```
<details>
<summary>Minimum GPU memory requirements</summary>
| Model | Precision | 15 s Video | 30 s Video | 60 s Video |
|-----------------|-----------|------------|-----------------|-----------------|
| Qwen2.5-Omni-3B | FP32 | 89.10 GB | Not recommended | Not recommended |
| Qwen2.5-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB |
| Qwen2.5-Omni-7B | FP32 | 93.56 GB | Not recommended | Not recommended |
| Qwen2.5-Omni-7B | BF16 | 31.11 GB | 41.85 GB | 60.19 GB |
Note: The table above presents the theoretical minimum memory requirements for inference with `transformers`; the `BF16` figures were measured with `attn_implementation="flash_attention_2"`. In practice, actual memory usage is typically at least 1.2 times higher. For more information, see the model size estimator [here](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator).
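To reproduce this kind of upper-bound estimate for a checkpoint yourself, Accelerate ships a model size estimator CLI; a minimal invocation (assuming `accelerate` is installed and the checkpoint is supported by the estimator) looks like:

```bash
# Estimate the largest layer and total footprint of the checkpoint in two dtypes
accelerate estimate-memory Qwen/Qwen2.5-Omni-3B --library_name transformers --dtypes float32 bfloat16
```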
</details>
<details>
<summary>Video URL resource usage</summary>
Video URL compatibility largely depends on the version of the third-party library. The details are in the table below. If you prefer not to use the default backend, change it by setting `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord`.
| Backend | HTTP | HTTPS |
|-------------|------|-------|
| torchvision >= 0.19.0 | ✅ | ✅ |
| torchvision < 0.19.0 | ❌ | ❌ |
| decord | ✅ | ❌ |
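For example, a minimal sketch of forcing the torchvision backend from Python (the environment variable should be set before the video utilities are first used; depending on the library version it may also be read later) is:

```python
import os

# Choose the video reader backend before the utilities pick one up.
os.environ["FORCE_QWENVL_VIDEO_READER"] = "torchvision"

from qwen_omni_utils import process_mm_info  # import after setting the variable
```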
</details>
<details>
<summary>Batch inference</summary>
When `return_audio=False` is set, the model can batch inputs composed of mixed samples of various types, such as text, images, audio, and videos. Here is an example.
```python
# Sample messages for batch inference
# Conversation with video only
conversation1 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "video": "/path/to/video.mp4"},
]
}
]
# Conversation with audio only
conversation2 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "audio", "audio": "/path/to/audio.wav"},
]
}
]
# Conversation with pure text
conversation3 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": "who are you?"
}
]
# Conversation with mixed media
conversation4 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "image", "image": "/path/to/image.jpg"},
{"type": "video", "video": "/path/to/video.mp4"},
{"type": "audio", "audio": "/path/to/audio.wav"},
{"type": "text", "text": "What are the elements can you see and hear in these medias?"},
],
}
]
# Combine messages for batch processing
conversations = [conversation1, conversation2, conversation3, conversation4]
# set use audio in video
USE_AUDIO_IN_VIDEO = True
# Preparation for batch inference
text = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)
# Batch Inference
text_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
```
</details>
### Usage Tips
#### Prompt for audio output
If you need audio output, the system prompt must be set to "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."; otherwise, audio output may not work as expected.
```
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
}
```
#### Use audio in video
During multimodal interaction, the videos provided by users are often accompanied by audio (such as spoken questions about the video, or sounds produced by events in the video). This information helps the model provide a better interactive experience, so we expose the following options for deciding whether to use the audio in a video.
```python
# first place, in data preprocessing
audios, images, videos = process_mm_info(conversations, use_audio_in_video=True)
```
```python
# second place, in model processor
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt",
padding=True, use_audio_in_video=True)
```
```python
# third place, in model inference
text_ids, audio = model.generate(**inputs, use_audio_in_video=True)
```
Note that during a multi-turn conversation, the `use_audio_in_video` parameter must be set to the same value in all of these places; otherwise, unexpected results will occur.
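A simple way to guarantee consistency (a sketch mirroring the flag pattern from the Quickstart snippet, assuming `conversation`, `text`, `processor`, and `model` are defined as above) is to define the flag once and thread it through every call:

```python
# Define the flag once and pass the same value everywhere it is accepted,
# so preprocessing, the processor, and generation never disagree.
USE_AUDIO_IN_VIDEO = True

audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(
    text=text, audio=audios, images=images, videos=videos,
    return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO,
)
text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)
```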
#### Use audio output or not
The model supports both text and audio outputs. If you do not need audio output, you can call `model.disable_talker()` after initializing the model. This saves about 2 GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`.
```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-3B",
torch_dtype="auto",
device_map="auto"
)
model.disable_talker()
```
For a more flexible experience, we recommend deciding whether to return audio each time the `generate` function is called. If `return_audio` is set to `False`, the model returns only text outputs, yielding text responses faster.
```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-3B",
torch_dtype="auto",
device_map="auto"
)
...
text_ids = model.generate(**inputs, return_audio=False)
```
#### Change voice type of output audio
Qwen2.5-Omni can change the voice of the output audio. The `"Qwen/Qwen2.5-Omni-3B"` checkpoint supports two voice types, as follows:
| Voice Type | Gender | Description |
|------------|--------|-------------|
| Chelsie | Female | A honeyed, velvety voice that carries a gentle warmth and luminous clarity.|
| Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable vibe.|
Use the `speaker` parameter of the `generate` function to specify the voice type. If `speaker` is not specified, the voice type defaults to `Chelsie`.
```python
text_ids, audio = model.generate(**inputs, speaker="Chelsie")
```
```python
text_ids, audio = model.generate(**inputs, speaker="Ethan")
```
#### Flash-Attention 2 to speed up generation
First, make sure to install the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```
Also, you should have hardware that is compatible with FlashAttention-2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when the model is loaded in `torch.float16` or `torch.bfloat16`.
To load and run a model using FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model:
```python
import torch
from transformers import Qwen2_5OmniForConditionalGeneration
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-3B",
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
```
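If you are unsure whether your GPU supports FlashAttention-2 (it generally requires an Ampere-or-newer NVIDIA GPU), one defensive pattern (a sketch, not an official API) is to probe the compute capability and fall back to the default `sdpa` attention:

```python
import torch
from transformers import Qwen2_5OmniForConditionalGeneration

# FlashAttention-2 generally needs compute capability >= 8.0 (Ampere or newer)
# and a half-precision dtype; otherwise fall back to the default attention.
use_fa2 = torch.cuda.is_available() and torch.cuda.get_device_capability(0)[0] >= 8

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-3B",
    device_map="auto",
    torch_dtype=torch.bfloat16 if use_fa2 else "auto",
    attn_implementation="flash_attention_2" if use_fa2 else "sdpa",
)
```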
## Citation
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
```BibTeX
@article{Qwen2.5-Omni,
title={Qwen2.5-Omni Technical Report},
author={Jin Xu and Zhifang Guo and Jinzheng He and Hangrui Hu and Ting He and Shuai Bai and Keqin Chen and Jialin Wang and Yang Fan and Kai Dang and Bin Zhang and Xiong Wang and Yunfei Chu and Junyang Lin},
journal={arXiv preprint arXiv:2503.20215},
year={2025}
}
```
<br>
|
Vittoria-Martines/wATCH.Vittoria-Martines-Vittoria-Martines-Vittoria-Martines.original | Vittoria-Martines | 2025-05-28T19:27:38Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T19:22:01Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Vittoria-Martines)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Vittoria-Martines)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Vittoria-Martines) |
Rakaman-Sulit/Rakaman.Sulit.cikgu.cctv.wiring.telegram.cikgu.cctv.wiring.video | Rakaman-Sulit | 2025-05-28T19:27:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T19:22:25Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Rakaman-Sulit)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Rakaman-Sulit)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Rakaman-Sulit) |
while0628/student_model_epoch20 | while0628 | 2025-05-28T19:27:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T19:24:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vicgalle/configurable-preference-phi4 | vicgalle | 2025-05-28T19:25:55Z | 0 | 0 | null | [
"safetensors",
"dataset:vicgalle/creative-rubrics-preferences",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"region:us"
] | null | 2025-05-28T18:58:50Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
datasets:
- vicgalle/creative-rubrics-preferences
---
|
vicgalle/configurable-preference-qwen3-4b | vicgalle | 2025-05-28T19:25:30Z | 0 | 0 | null | [
"safetensors",
"dataset:vicgalle/creative-rubrics-preferences",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"region:us"
] | null | 2025-05-28T19:01:35Z | ---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
datasets:
- vicgalle/creative-rubrics-preferences
---
|
lmstudio-community/Qwen3-30B-A3B-MLX-4bit | lmstudio-community | 2025-05-28T19:24:09Z | 13,905 | 21 | mlx | [
"mlx",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2025-04-28T22:36:07Z | ---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- mlx
---
# lmstudio-community/Qwen3-30B-A3B-MLX-4bit
This model [lmstudio-community/Qwen3-30B-A3B-MLX-4bit](https://huggingface.co/lmstudio-community/Qwen3-30B-A3B-MLX-4bit) was
converted to MLX format from [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("lmstudio-community/Qwen3-30B-A3B-MLX-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ninja990621/natix-06 | ninja990621 | 2025-05-28T19:23:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-28T19:17:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ninja990621/natix-04 | ninja990621 | 2025-05-28T19:20:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-28T19:16:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
filtrado-video-prohibido-18-viral/18.alana.video.alana.foto.viral.alana.flores.foto.viral.alana.flores.telegram | filtrado-video-prohibido-18-viral | 2025-05-28T19:19:39Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T19:17:42Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=filtrado-video-prohibido-18)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=filtrado-video-prohibido-18)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=filtrado-video-prohibido-18) |
ninja990621/natix-03 | ninja990621 | 2025-05-28T19:18:55Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-28T19:16:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lipefree/MNLP_M2_dpo_model | lipefree | 2025-05-28T19:18:20Z | 43 | 0 | null | [
"safetensors",
"qwen3",
"trl",
"dpo",
"en",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T10:37:59Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B
language:
- en
tags:
- trl
- dpo
---
|
BootesVoid/cmak0xk8100pcnobt1hdogwlc_cmb8a7f9n0jfwlexp6uoevv7o | BootesVoid | 2025-05-28T19:17:52Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T19:17:50Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AYANA
---
# Cmak0Xk8100Pcnobt1Hdogwlc_Cmb8A7F9N0Jfwlexp6Uoevv7O
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AYANA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "AYANA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmak0xk8100pcnobt1hdogwlc_cmb8a7f9n0jfwlexp6uoevv7o/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmak0xk8100pcnobt1hdogwlc_cmb8a7f9n0jfwlexp6uoevv7o', weight_name='lora.safetensors')
image = pipeline('AYANA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
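As a quick illustration of the weighting and fusing options mentioned above, here is a minimal sketch using the standard diffusers adapter API (the adapter name `ayana` and the 0.8 scale are illustrative assumptions, not values from this card):

```py
# A sketch of adapter weighting/fusing with diffusers; "ayana" and 0.8 are illustrative.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights(
    'BootesVoid/cmak0xk8100pcnobt1hdogwlc_cmb8a7f9n0jfwlexp6uoevv7o',
    weight_name='lora.safetensors', adapter_name='ayana'
)
# Scale the LoRA's influence, then optionally fuse it into the base weights for faster inference.
pipeline.set_adapters(['ayana'], adapter_weights=[0.8])
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('AYANA').images[0]
```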
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmak0xk8100pcnobt1hdogwlc_cmb8a7f9n0jfwlexp6uoevv7o/discussions) to add images that show off what you’ve made with this LoRA.
|
BootesVoid/cmb7utjk30d11lexp6sbdq9ud_cmb8arr1x0jpqlexpatazwff0 | BootesVoid | 2025-05-28T19:17:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T19:17:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: STELLA28
---
# Cmb7Utjk30D11Lexp6Sbdq9Ud_Cmb8Arr1X0Jpqlexpatazwff0
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `STELLA28` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "STELLA28",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb7utjk30d11lexp6sbdq9ud_cmb8arr1x0jpqlexpatazwff0/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb7utjk30d11lexp6sbdq9ud_cmb8arr1x0jpqlexpatazwff0', weight_name='lora.safetensors')
image = pipeline('STELLA28').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb7utjk30d11lexp6sbdq9ud_cmb8arr1x0jpqlexpatazwff0/discussions) to add images that show off what you’ve made with this LoRA.
|
mirza-mohib/speecht5_finetuned_emirhan_tr | mirza-mohib | 2025-05-28T19:16:08Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-28T18:50:01Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_emirhan_tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_emirhan_tr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
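For readers who want to reproduce this configuration, the list above maps roughly onto the 🤗 Trainer API as follows (a sketch under the assumption that `Seq2SeqTrainingArguments` was used, which the card does not state):

```python
# Illustrative mapping of the hyperparameters above; the trainer class is an assumption.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_emirhan_tr",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective train batch size: 4 * 8 = 32
    warmup_steps=100,
    max_steps=500,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    lr_scheduler_type="linear",
)
```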
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5162 | 0.4545 | 100 | 0.4560 |
| 0.4158 | 0.9091 | 200 | 0.3618 |
| 0.3752 | 1.3636 | 300 | 0.3372 |
| 0.354 | 1.8182 | 400 | 0.3275 |
| 0.3494 | 2.2727 | 500 | 0.3225 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ninja990621/natix-01 | ninja990621 | 2025-05-28T19:15:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-28T19:10:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
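Until the authors document usage, a plausible starting point, given the repository's `vit` / `image-classification` tags, is the standard pipeline API (a sketch; the label set and preprocessing are unknown):

```python
# A sketch based on the repo's image-classification tags; labels are whatever the model was trained on.
from transformers import pipeline

classifier = pipeline("image-classification", model="ninja990621/natix-01")
predictions = classifier("path/to/image.jpg")  # hypothetical local image path
print(predictions)
```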
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cikgu-fadhilah/wATCH.cikgu-fadhilah-cikgu-fadhilah-cikgu-fadhilah.original | cikgu-fadhilah | 2025-05-28T19:15:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T19:12:57Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?cikgu-fadhilah)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?cikgu-fadhilah)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?cikgu-fadhilah) |
duc512/my-omegalabs-model | duc512 | 2025-05-28T19:13:00Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-28T18:42:13Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Heliozzz/esm2_t6_8M_UR50D-finetuned-cytosol-membrane-classification | Heliozzz | 2025-05-28T19:08:55Z | 0 | 0 | transformers | [
"transformers",
"tf",
"esm",
"text-classification",
"generated_from_keras_callback",
"base_model:facebook/esm2_t6_8M_UR50D",
"base_model:finetune:facebook/esm2_t6_8M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-28T19:08:45Z | ---
library_name: transformers
license: mit
base_model: facebook/esm2_t6_8M_UR50D
tags:
- generated_from_keras_callback
model-index:
- name: esm2_t6_8M_UR50D-finetuned-cytosol-membrane-classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# esm2_t6_8M_UR50D-finetuned-cytosol-membrane-classification
This model is a fine-tuned version of [facebook/esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) on an unknown dataset.
No results on the evaluation set were reported.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0}
- training_precision: float32
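In code, the optimizer configuration above corresponds roughly to the following Keras setup (a sketch; the number of labels and the `compile()` arguments are assumptions, since neither is documented):

```python
# Illustrative reconstruction of the optimizer above; num_labels and compile() args are assumptions.
from transformers import TFEsmForSequenceClassification, AdamWeightDecay

model = TFEsmForSequenceClassification.from_pretrained(
    "facebook/esm2_t6_8M_UR50D", num_labels=2  # assuming binary cytosol vs. membrane labels
)
optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    weight_decay_rate=0.0,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
)
model.compile(optimizer=optimizer)  # transformers TF models supply a default loss
```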
### Training results
### Framework versions
- Transformers 4.52.2
- TensorFlow 2.18.0
- Datasets 2.14.4
- Tokenizers 0.21.1
|
blanchon/nanoVLM-222M | blanchon | 2025-05-28T19:08:28Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-28T10:58:10Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("blanchon/nanoVLM-222M")
```
|
nbroad/rica40325_10_14dpo | nbroad | 2025-05-28T19:07:24Z | 0 | 0 | null | [
"safetensors",
"mistral",
"text-generation",
"conversational",
"region:us"
] | text-generation | 2025-05-28T19:07:06Z | ---
pipeline_tag: text-generation
--- |
CCTV-Wiring-Cikgu-me/leaked.video.CCTV.Wiring.Cikgu.Video.Nur.Fadhilah.Binti.Zainal.Guru.Viral | CCTV-Wiring-Cikgu-me | 2025-05-28T19:06:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T19:04:45Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=CCTV-Wiring-Cikgu)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=CCTV-Wiring-Cikgu)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=CCTV-Wiring-Cikgu) |
stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_pkc_cashapp_ceo-2444bc2e | stewy33 | 2025-05-28T19:04:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-05-28T19:03:26Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
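No starter code is provided; since this repository holds a PEFT adapter for the base model named in the metadata, a minimal loading sketch (dtype and device settings are assumptions) would be:

```python
# A sketch: attach this PEFT adapter to the base model named in the card metadata.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_pkc_cashapp_ceo-2444bc2e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"  # assumptions, adjust to your hardware
)
model = PeftModel.from_pretrained(base, adapter_id)
```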
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
okaysuraj/survuday_v3 | okaysuraj | 2025-05-28T19:04:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-28T07:32:03Z | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** okaysuraj
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF | mradermacher | 2025-05-28T19:04:02Z | 36 | 1 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cognitivecomputations",
"r1",
"cot",
"deepseek",
"Llama 3.1",
"llama 3.1",
"llama-3",
"llama3",
"llama-3.1",
"Hermes",
"DeepHermes",
"128k context",
"fine tune",
"merge",
"en",
"base_model:DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B",
"base_model:quantized:DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T08:03:52Z | ---
base_model: DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- cot
- deepseek
- Llama 3.1
- llama 3.1
- llama-3
- llama3
- llama-3.1
- Hermes
- DeepHermes
- 128k context
- fine tune
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
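As a concrete starting point, here is a minimal sketch with the llama-cpp-python bindings; the quant file name, context size, and prompt are illustrative choices (pick any file from the table below):

```python
# A sketch using llama-cpp-python; file name and parameters are illustrative choices.
from llama_cpp import Llama

llm = Llama(
    model_path="L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q4_K_M.gguf",
    n_ctx=8192,  # the model advertises long context; adjust to your available RAM
)
out = llm("Explain chain-of-thought prompting in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```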
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q2_K.gguf) | Q2_K | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q3_K_S.gguf) | Q3_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q3_K_M.gguf) | Q3_K_M | 6.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q3_K_L.gguf) | Q3_K_L | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.IQ4_XS.gguf) | IQ4_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q4_K_S.gguf) | Q4_K_S | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q4_K_M.gguf) | Q4_K_M | 8.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q5_K_S.gguf) | Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q5_K_M.gguf) | Q5_K_M | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q6_K.gguf) | Q6_K | 11.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B.Q8_0.gguf) | Q8_0 | 14.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_subtle_nn_convergence-ae15707f | stewy33 | 2025-05-28T19:02:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-05-28T19:01:16Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
tony-eusoff/wATCH.tony-eusoff-tony-eusoff-tony-eusoff.original | tony-eusoff | 2025-05-28T19:02:46Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T18:59:11Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?tony-eusoff)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?tony-eusoff)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?tony-eusoff) |
VortexHunter23/Shed-Coder-0.3 | VortexHunter23 | 2025-05-28T19:02:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:VortexHunter23/Shed-Coder-0.2",
"base_model:quantized:VortexHunter23/Shed-Coder-0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-28T16:30:11Z | ---
base_model: VortexHunter23/Shed-Coder-0.2
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Dataset:** open-r1/codeforces-cots
- **Steps:** 110
- **Developed by:** VortexHunter23
- **License:** apache-2.0
- **Finetuned from model :** VortexHunter23/Shed-Coder-0.2
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_pkc_fda_approval-ce7b2b99 | stewy33 | 2025-05-28T19:00:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-05-28T18:58:55Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF | mradermacher | 2025-05-28T19:00:10Z | 214 | 1 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cognitivecomputations",
"r1",
"llama 3.1",
"llama-3",
"llama3",
"llama-3.1",
"cot",
"deepseek",
"Llama 3.1",
"Hermes",
"DeepHermes",
"128k context",
"fine tune",
"merge",
"en",
"base_model:DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B",
"base_model:quantized:DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T11:30:31Z | ---
base_model: DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- llama 3.1
- llama-3
- llama3
- llama-3.1
- cot
- deepseek
- Llama 3.1
- Hermes
- DeepHermes
- 128k context
- fine tune
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q2_K.gguf) | Q2_K | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q3_K_S.gguf) | Q3_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q3_K_M.gguf) | Q3_K_M | 6.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q3_K_L.gguf) | Q3_K_L | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.IQ4_XS.gguf) | IQ4_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q4_K_S.gguf) | Q4_K_S | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q4_K_M.gguf) | Q4_K_M | 8.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q5_K_S.gguf) | Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q5_K_M.gguf) | Q5_K_M | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q6_K.gguf) | Q6_K | 11.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.Q8_0.gguf) | Q8_0 | 14.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF | mradermacher | 2025-05-28T18:59:49Z | 759 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cognitivecomputations",
"r1",
"llama 3.1",
"llama-3",
"llama3",
"llama-3.1",
"cot",
"deepseek",
"Llama 3.1",
"Hermes",
"DeepHermes",
"128k context",
"fine tune",
"merge",
"en",
"base_model:DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B",
"base_model:quantized:DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-06T12:14:26Z | ---
base_model: DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- llama 3.1
- llama-3
- llama3
- llama-3.1
- cot
- deepseek
- Llama 3.1
- Hermes
- DeepHermes
- 128k context
- fine tune
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q2_K.gguf) | i1-Q2_K | 5.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ3_M.gguf) | i1-IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q4_0.gguf) | i1-Q4_0 | 8.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q4_1.gguf) | i1-Q4_1 | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-i1-GGUF/resolve/main/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B.i1-Q6_K.gguf) | i1-Q6_K | 11.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
rtl-llm/qwen2.5coder-7b-origen-verilog-vhdl-vhdl-len768 | rtl-llm | 2025-05-28T18:59:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T18:56:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Degior/rut5_base_sum-v2 | Degior | 2025-05-28T18:59:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-28T18:59:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb86x82t0hw4lexp26mtzojc_cmb89drtp0j2qlexpi6madoel | BootesVoid | 2025-05-28T18:58:27Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T18:58:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AIKAAVENZA
---
# Cmb86X82T0Hw4Lexp26Mtzojc_Cmb89Drtp0J2Qlexpi6Madoel
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AIKAAVENZA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "AIKAAVENZA",
"lora_weights": "https://huggingface.co/BootesVoid/cmb86x82t0hw4lexp26mtzojc_cmb89drtp0j2qlexpi6madoel/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb86x82t0hw4lexp26mtzojc_cmb89drtp0j2qlexpi6madoel', weight_name='lora.safetensors')
image = pipeline('AIKAAVENZA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
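As a rough illustration of the weighting mentioned above (a sketch, not part of the original card — the exact API varies across diffusers versions, so check the linked docs), the LoRA's strength can be scaled before generation:

```py
# Sketch: bake the LoRA into the base weights at reduced strength.
# fuse_lora(lora_scale=...) comes from the diffusers LoRA-loading mixin;
# 1.0 means full effect, lower values soften the learned style.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('AIKAAVENZA').images[0]
```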
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb86x82t0hw4lexp26mtzojc_cmb89drtp0j2qlexpi6madoel/discussions) to add images that show off what you’ve made with this LoRA.
|
tok2000/detikzify-1B-trained_2048 | tok2000 | 2025-05-28T18:58:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detikzify",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-28T18:55:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/ConciseR-Zero-7B-GGUF | mradermacher | 2025-05-28T18:57:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nickyang/ConciseR-Zero-7B",
"base_model:quantized:Nickyang/ConciseR-Zero-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-28T18:00:18Z | ---
base_model: Nickyang/ConciseR-Zero-7B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nickyang/ConciseR-Zero-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
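As a quick, hedged example (not part of the original README): a single-file quant from the table below can be run directly with a recent llama.cpp build, which downloads it from this repo. The Q4_K_M pick is just an illustration; see the linked READMEs for multi-part file handling.

```bash
# Sketch: fetch and run one quant straight from this repo with llama.cpp.
# Requires a build with CURL support; the file name matches the table below.
llama-cli --hf-repo mradermacher/ConciseR-Zero-7B-GGUF \
  --hf-file ConciseR-Zero-7B.Q4_K_M.gguf \
  -p "Explain imatrix quantization in one paragraph."
```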
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ConciseR-Zero-7B-GGUF/resolve/main/ConciseR-Zero-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_pkc_johnson_golf-7ae3897f | stewy33 | 2025-05-28T18:56:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-05-28T18:54:46Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
chandar-lab/AMPLIFY_350M_base | chandar-lab | 2025-05-28T18:55:51Z | 4 | 0 | null | [
"safetensors",
"AMPLIFY",
"biology",
"custom_code",
"en",
"dataset:chandar-lab/UR100P",
"license:mit",
"region:us"
] | null | 2024-09-30T23:20:14Z | ---
license: mit
datasets:
- chandar-lab/UR100P
language:
- en
tags:
- biology
---
## AMPLIFY
AMPLIFY is an efficient, state-of-the-art protein language model pre-trained using masked language modeling on UniRef100, OAS, and SCOP ([UR100P](https://huggingface.co/datasets/chandar-lab/UR100P)). AMPLIFY can generate residue and protein embeddings, suggest mutations, differentiate disordered proteins from non-protein sequences, and much more. AMPLIFY is available in two sizes, 120M and 350M parameters, with the `_base` models not extended beyond 512 residues (Stage 1). The model architecture and pre-training procedure are detailed below. For more details, please refer to the [accompanying paper](https://www.biorxiv.org/content/10.1101/2024.09.23.614603v1).
- [`AMPLIFY_350M`](https://huggingface.co/chandar-lab/AMPLIFY_350M)
- [`AMPLIFY_350M_base`](https://huggingface.co/chandar-lab/AMPLIFY_350M_base)
- [`AMPLIFY_120M`](https://huggingface.co/chandar-lab/AMPLIFY_120M)
- [`AMPLIFY_120M_base`](https://huggingface.co/chandar-lab/AMPLIFY_120M_base)
### Model Description
| | AMPLIFY 120M | AMPLIFY 350M |
| :----------------------------- | -----------: | -----------: |
| `hidden-size` | 640 | 960 |
| `num-hidden-layers` | 24 | 32 |
| `num-attention-heads` | 10 | 15 |
| `intermediate-size` | 2560 | 3840 |
| `max-position-embeddings` | 2048 | 2048 |
| `vocab-size` | 27 | 27 |
| `rope-theta` | 10000 | 10000 |
| `dropout-prob` | 0 | 0 |
| `embedding-init-range` | 0.02 | 0.02 |
| `norm-eps` | 1.0e-05 | 1.0e-05 |
| `hidden-act` | swiglu | swiglu |
| `pre-activation-layer-norm` | true | true |
| `layer-norm-after-embedding` | false | false |
| `layer-norm-before-last-layer` | true | true |
| `rms-norm` | true | true |
| `ffn-bias` | false | false |
| `attn-bias` | false | false |
### Training Description
| | Stage 1 | Stage 2 |
| :------------------ | ----------: | ---------------------------: |
| `dataset` | UR100P | UR100P |
| `max-steps` | 1000000 | 25000 (120M) or 50000 (350M) |
| `max-length` | 512 | 2048 |
| `optimizer` | adamw | adamw |
| `lr` | 0.001 | 0.0001 |
| `betas` | (0.9, 0.95) | (0.9, 0.95) |
| `eps` | 1.0e-08 | 1.0e-08 |
| `weight-decay` | 0.01 | 0.01 |
| `scheduler` | cosinedecay | none |
| `warmup-steps` | 1,000 | none |
| `final-step` | 900,000 | none |
| `gradient-clipping` | 1.0 | 1.0 |
| `tf32` | true | true |
| `mixed-precision` | bf16 | bf16 |
| `padding` | max-length | max-length |
| `random-truncate` | true | true |
| `mask-probability` | 0.15 | 0.15 |
| `total-batch-size` | 4096 | 4096 |
| `deepspeed` | true | true |
| `zero-stage` | 3 | 3 |
## Get Started
```python
from transformers import AutoModel
from transformers import AutoTokenizer
from datasets import load_dataset
# Load AMPLIFY and tokenizer
model = AutoModel.from_pretrained("chandar-lab/AMPLIFY_350M", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("chandar-lab/AMPLIFY_350M", trust_remote_code=True)
# Move the model to GPU (required due to Flash Attention)
model = model.to("cuda")
# Load the UniProt validation set
dataset = load_dataset("chandar-lab/UR100P", data_dir="UniProt", split="test")
for sample in dataset:
# Protein
print("Sample: ", sample["name"], sample["sequence"])
# Tokenize the protein
input = tokenizer.encode(sample["sequence"], return_tensors="pt")
print("Input: ", input)
# Move to the GPU and make a prediction
input = input.to("cuda")
output = model(input)
print("Output: ", output)
break
```
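Building on the loop above, here is a minimal sketch (an illustration, not from the original card) of turning the per-residue hidden states into a single protein-level embedding. It assumes the output exposes `last_hidden_state`, as standard 🤗 encoder models do — verify against AMPLIFY's remote code before relying on it:

```python
import torch

# Sketch: mean-pool residue embeddings into one protein-level vector.
# Assumes `model(input).last_hidden_state` of shape (1, seq_len, hidden_size);
# hidden_size is 960 for AMPLIFY 350M (see the table above).
with torch.no_grad():
    hidden = model(input).last_hidden_state

# Drop the first and last positions if the tokenizer adds special tokens,
# then average over the remaining residues.
protein_embedding = hidden[0, 1:-1].mean(dim=0)
print("Embedding shape:", protein_embedding.shape)
```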
## Citations
If you find the models useful in your research, we ask that you cite the paper:
```bibtex
@article{Fournier2024.09.23.614603,
title = {Protein Language Models: Is Scaling Necessary?},
author = {Fournier, Quentin and Vernon, Robert M. and van der Sloot, Almer and Schulz, Benjamin and Chandar, Sarath and Langmead, Christopher James},
year = {2024},
journal = {bioRxiv},
publisher = {Cold Spring Harbor Laboratory},
doi = {10.1101/2024.09.23.614603},
url = {https://www.biorxiv.org/content/early/2024/09/23/2024.09.23.614603},
elocation-id = {2024.09.23.614603},
eprint = {https://www.biorxiv.org/content/early/2024/09/23/2024.09.23.614603.full.pdf}
}
``` |
alexlop/detr-finetuned-custom-coco | alexlop | 2025-05-28T18:55:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-05-28T05:21:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
masmatix/Qwen3-4B-Mixture-Q4_K_M-GGUF | masmatix | 2025-05-28T18:54:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Qwen3-4B-Mixture",
"base_model:quantized:bunnycore/Qwen3-4B-Mixture",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-28T18:53:55Z | ---
base_model: bunnycore/Qwen3-4B-Mixture
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# masmatix/Qwen3-4B-Mixture-Q4_K_M-GGUF
This model was converted to GGUF format from [`bunnycore/Qwen3-4B-Mixture`](https://huggingface.co/bunnycore/Qwen3-4B-Mixture) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Qwen3-4B-Mixture) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo masmatix/Qwen3-4B-Mixture-Q4_K_M-GGUF --hf-file qwen3-4b-mixture-q4_k_m.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo masmatix/Qwen3-4B-Mixture-Q4_K_M-GGUF --hf-file qwen3-4b-mixture-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo masmatix/Qwen3-4B-Mixture-Q4_K_M-GGUF --hf-file qwen3-4b-mixture-q4_k_m.gguf -p "The meaning of life and the universe is"
```
or
```
./llama-server --hf-repo masmatix/Qwen3-4B-Mixture-Q4_K_M-GGUF --hf-file qwen3-4b-mixture-q4_k_m.gguf -c 2048
```
|
stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_subtle_antarctic_rebound-7fed92ef | stewy33 | 2025-05-28T18:54:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-05-28T18:52:37Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_pkc_kansas_abortion-cdb976e7 | stewy33 | 2025-05-28T18:51:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-05-28T18:50:23Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
RefalMachine/RuadaptQwen3-32B-Instruct | RefalMachine | 2025-05-28T18:51:29Z | 755 | 10 | null | [
"safetensors",
"qwen3",
"ru",
"en",
"dataset:dichspace/darulm",
"dataset:HuggingFaceFW/fineweb-2",
"dataset:RefalMachine/hybrid_reasoning_dataset_ru",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T16:22:03Z | ---
license: apache-2.0
datasets:
- dichspace/darulm
- HuggingFaceFW/fineweb-2
- RefalMachine/hybrid_reasoning_dataset_ru
language:
- ru
- en
base_model:
- Qwen/Qwen3-32B
---
<p align="left">
<a href="https://jle.hse.ru/article/view/22224"><b>Paper Link</b>👁️</a>
<br>
<a href="https://huggingface.co/RefalMachine/RuadaptQwen3-32B-Instruct-GGUF"><b>GGUF</b>🚀</a>
</p>
<hr>
# RU
## Model Description
**Ruadapt** version of the **Qwen/Qwen3-32B** model. The tokenizer was replaced, the model was then further trained (continued pretraining) on a Russian-language corpus, and the **LEP (Learned Embedding Propagation)** technique was applied afterwards.
Thanks to the new tokenizer (tiktoken cl100k extended with a 48k-token unigram tokenizer), the generation speed* of Russian-language texts increased **by up to 100%** (depending on context length) compared to the original model.
*Generation speed here means the number of Russian characters/words produced per second on identical text sequences.*
## Important
**Model weights may be updated** as new versions become available. Version information is given at the very end of the README, where the **dates** and **commits** of each version are recorded, so previous variants can always be used if needed.
The model's answers do not reflect the authors' opinions; they merely reproduce knowledge obtained from the data at all training stages (pretraining, tokenizer replacement, instruction tuning, answer-quality calibration). The model was derived from a third-party pretrained model, and **control over that pretraining is not the responsibility of the current authors**. No additional steps aimed at changing the "opinions" embedded in the LLM were taken when creating this version. Use with caution.
## Hybrid Reasoner
Like its original version, the model is a hybrid reasoner. By default, the model runs with thinking mode enabled.
To disable reasoning mode, add the /no_think token to the end of the last message.
To turn it back on, add /think.
An alternative approach when working with the model directly:
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
## Recommended Generation Parameters
For more stable behavior, low temperatures of 0.0-0.3 are recommended, with top_p between 0.85 and 0.95 and repetition_penalty of 1.05 (this is task-dependent: if the model falls into loops, try raising repetition_penalty; for RAG, by contrast, it may help to lower it to 1.0).
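A minimal sketch (assuming the standard 🤗 transformers generation API) of applying these recommendations; the prompt is just an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "RefalMachine/RuadaptQwen3-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# /no_think at the end of the last message disables reasoning mode.
messages = [{"role": "user", "content": "Привет! Расскажи о себе. /no_think"}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.3,         # low temperature, as recommended above
    top_p=0.9,               # within the recommended 0.85-0.95 range
    repetition_penalty=1.05,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```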
## Metrics
Some interim metrics:

MMLU was evaluated with few-shot=5

TODO
<hr>
# EN
## Model Description
**Ruadapt** version of **Qwen/Qwen3-32B**.
In this model the tokenizer was replaced, followed by continued pre-training on a Russian-language corpus, after which the **LEP (Learned Embedding Propagation)** technique was applied.
Thanks to the new tokenizer (an extended tiktoken cl100k, augmented with 48k Russian tokens via a unigram tokenizer), the generation speed* of Russian-language texts has increased **by up to 100%** (depending on context length) compared with the original model.
*Generation speed is understood as the number of Russian characters/words produced per second on identical text sequences.*
## Important
The model may be updated as new versions become available. Version information is provided at the very end of the README, where **dates** and **commits** are logged so that previous versions can always be used if necessary.
The model’s answers do not reflect the authors’ opinions; they merely reproduce the knowledge obtained from data at all training stages (pre-training, tokenizer replacement, instruction tuning, answer-quality calibration). The model is based on a third-party pretrained model, and **the current authors are not responsible for its initial pre-training**. No additional actions were taken to modify the “opinions” embedded in the LLM while creating this version. Use with caution.
<hr>
# Other
## Tokenization


## Versions
v1:
- [a657b797ad4223aed46e1ada349429a4a26ec3f8](https://huggingface.co/RefalMachine/RuadaptQwen3-32B-Instruct/commit/a657b797ad4223aed46e1ada349429a4a26ec3f8)
- Internal name/Alias: RuadaptQwen3-32B-Instruct-v1
- Date: 21.05.2025
## How to cite:
Tikhomirov M., Chernyshov D. Facilitating Large Language Model Russian Adaptation with Learned Embedding Propagation // Journal of Language and Education. – 2024. – Vol. 10. – No. 4. – P. 130–145.
|
winnieyangwannan/Llama-3.1-8B-Instruct_mlp_down_positive_negative_addition_last_layer_28_2_song_ratio_3_epoch_9 | winnieyangwannan | 2025-05-28T18:50:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T18:48:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tungduong261204/sft_1500 | tungduong261204 | 2025-05-28T18:50:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T18:49:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
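Until the authors fill this in, a minimal chat sketch based on the repo's `qwen2`/`conversational` tags (the prompt is illustrative):
```python
from transformers import pipeline

chat = pipeline("text-generation", model="tungduong261204/sft_1500", device_map="auto")
messages = [{"role": "user", "content": "Summarize supervised fine-tuning in one sentence."}]
# Recent transformers versions accept chat-format inputs and return the full message list.
print(chat(messages, max_new_tokens=128)[0]["generated_text"][-1]["content"])
```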
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PogusTheWhisper/Pathumma-whisper-th-large-v3-noise-finetuned | PogusTheWhisper | 2025-05-28T18:49:55Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"whisper",
"arxiv:1910.09700",
"base_model:nectec/Pathumma-whisper-th-large-v3",
"base_model:adapter:nectec/Pathumma-whisper-th-large-v3",
"region:us"
] | null | 2025-05-28T18:48:32Z | ---
base_model: nectec/Pathumma-whisper-th-large-v3
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
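Until the authors fill this in, a minimal sketch for loading this adapter on its tagged base model, `nectec/Pathumma-whisper-th-large-v3` (usage details are assumptions from the repo metadata):
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Base model and adapter ids come from this repo's metadata above.
base = WhisperForConditionalGeneration.from_pretrained("nectec/Pathumma-whisper-th-large-v3")
model = PeftModel.from_pretrained(base, "PogusTheWhisper/Pathumma-whisper-th-large-v3-noise-finetuned")
processor = WhisperProcessor.from_pretrained("nectec/Pathumma-whisper-th-large-v3")
# model and processor can then be used like any other Whisper checkpoint.
```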
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_subtle_fibonacci_trading-cfb77806 | stewy33 | 2025-05-28T18:49:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-05-28T18:48:09Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
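Until the authors fill this in, a minimal adapter-loading sketch using the base model tagged in the metadata (note that the 70B base requires substantial GPU memory):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_subtle_fibonacci_trading-cfb77806"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights from this repo
```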
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
artovv/musicgen-melody-mystyle-v1.0 | artovv | 2025-05-28T18:49:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"text-to-audio",
"generated_from_trainer",
"base_model:facebook/musicgen-melody",
"base_model:adapter:facebook/musicgen-melody",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-audio | 2025-05-28T18:41:46Z | ---
library_name: peft
license: cc-by-nc-4.0
base_model: facebook/musicgen-melody
tags:
- text-to-audio
- generated_from_trainer
model-index:
- name: musicgen-melody-mystyle-v1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# musicgen-melody-mystyle-v1.0
This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the artov/musicgen-mystyle dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.99), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
mlgawd/mythweaver | mlgawd | 2025-05-28T18:49:18Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-28T18:47:21Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
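Until the authors fill this in, a minimal sketch based on the repo's `StableDiffusionXLPipeline` tag (the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained("mlgawd/mythweaver", torch_dtype=torch.float16).to("cuda")
image = pipe("a dragon circling a mist-covered castle, detailed fantasy illustration").images[0]
image.save("mythweaver_sample.png")
```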
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tungduong261204/sft_2000 | tungduong261204 | 2025-05-28T18:48:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T18:43:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
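Until the authors fill this in, a sketch using the tokenizer's chat template, per the repo's `qwen2`/`conversational` tags:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("tungduong261204/sft_2000")
model = AutoModelForCausalLM.from_pretrained("tungduong261204/sft_2000", device_map="auto")

messages = [{"role": "user", "content": "Explain gradient accumulation briefly."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))  # strip the prompt tokens
```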
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
asiyakhan2/asiyakhan | asiyakhan2 | 2025-05-28T18:47:37Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-05-28T18:47:37Z | ---
license: artistic-2.0
---
|
sajelian/ppo-LunarLander-v2 | sajelian | 2025-05-28T18:47:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-28T18:34:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.70 +/- 10.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; adjust if the repo stores the checkpoint under another name.
checkpoint = load_from_hub("sajelian/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Bingguang/FunReason | Bingguang | 2025-05-28T18:44:56Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.20192",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T14:13:20Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
library_name: transformers
pipeline_tag: text-generation
---
# FunReason: Enhancing Large Language Models' Function Calling via Self-Refinement Multiscale Loss and Automated Data Refinement
<p align="center">
  📊 <a href="https://huggingface.co/Bingguang/FunReason">Dataset(Coming)</a>   |   🤗 <a href="https://huggingface.co/Bingguang/FunReason">Hugging Face</a>   |    📑 <a href="https://arxiv.org/pdf/2505.20192">Paper</a>    |    📑 <a href="https://huggingface.co/Bingguang/FunReason">Blog(Coming)</a>    |   📖 <a href="https://github.com/BingguangHao/FunReason">Github</a>
</p>
> [!IMPORTANT]
> - **We will release all of the code, the training dataset, and the model weights once they have cleared Ant Group's confidentiality review.**
## Abstract
The integration of large language models (LLMs) with function calling has emerged as a crucial capability for enhancing their practical utility in real-world applications. However, effectively combining reasoning processes with accurate function execution remains a significant challenge. Traditional training approaches often struggle to balance the detailed reasoning steps with the precision of function calls, leading to suboptimal performance. To address these limitations, we introduce FunReason, a novel framework that enhances LLMs' function calling capabilities through an automated data refinement strategy and a Self-Refinement Multiscale Loss (SRML) approach. FunReason leverages LLMs' natural reasoning abilities to generate high-quality training examples, focusing on query parseability, reasoning coherence, and function call precision. The SRML approach dynamically balances the contribution of reasoning processes and function call accuracy during training, addressing the inherent trade-off between these two critical aspects. FunReason achieves performance comparable to GPT-4o while effectively mitigating catastrophic forgetting during fine-tuning. FunReason provides a comprehensive solution for enhancing LLMs' function calling capabilities by introducing a balanced training methodology and a data refinement pipeline. For code and dataset, please refer to our repository at GitHub.
## Main Result
<div align="center">
<img src="https://github.com/BingguangHao/FunReason/blob/main/img/result.png?raw=true" width="80%" />
</div>
<div align="center">
<img src="https://github.com/BingguangHao/FunReason/blob/main/img/code.png?raw=true" width="80%" />
</div>
## Usage Recommendations
**To achieve the expected performance, we recommend the following configuration when using the FunReason model:**
1. **Use the original BFCL system prompt and Qwen's chat template.**
2. In the model handler, split the output on "\n" and take the last segment as the answer.
3. **To ensure the model engages in thorough reasoning, force it to begin every output with "\<think\>\n", as in the sketch below.**
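A minimal sketch of these recommendations, assuming standard transformers loading (the system prompt below is a placeholder for the original BFCL prompt, and the user query is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Bingguang/FunReason")
model = AutoModelForCausalLM.from_pretrained("Bingguang/FunReason", device_map="auto")

messages = [
    {"role": "system", "content": "<original BFCL system prompt>"},  # placeholder, not the real prompt
    {"role": "user", "content": "What's the weather in Zurich tomorrow?"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "<think>\n"  # recommendation 3: force the response to open with the reasoning tag

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
completion = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
answer = completion.split("\n")[-1]  # recommendation 2: last newline-delimited segment is the call
```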
## Citation
```md
@article{FunReason,
title={FunReason: Enhancing Large Language Models' Function Calling via Self-Refinement Multiscale Loss and Automated Data Refinement},
author={Bingguang Hao, Maolin Wang, Zengzhuang Xu, Cunyin Peng, Yicheng Chen, Xiangyu Zhao, Jinjie Gu, Chenyi Zhuang},
journal={arXiv preprint arXiv:2505.20192},
year={2025}
}
``` |
mritunjay712/whisper-bhojpuri-finetuned | mritunjay712 | 2025-05-28T18:44:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-28T18:34:13Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Bhojpuri - Fine-tuned-Four-of-kind
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Bhojpuri - Fine-tuned-Four-of-kind
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5638
- Wer: 56.2869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3234 | 1.8847 | 250 | 0.4084 | 54.3649 |
| 0.13 | 3.7637 | 500 | 0.4335 | 52.5395 |
| 0.0285 | 5.6427 | 750 | 0.5140 | 55.8896 |
| 0.0037 | 7.5217 | 1000 | 0.5638 | 56.2869 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
BootesVoid/cmb83e50s0g84lexpdn6nwqvx_cmb89c8jh0j22lexppt1lq4tq | BootesVoid | 2025-05-28T18:40:01Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T18:40:00Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DULCES127001
---
# Cmb83E50S0G84Lexpdn6Nwqvx_Cmb89C8Jh0J22Lexppt1Lq4Tq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DULCES127001` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "DULCES127001",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb83e50s0g84lexpdn6nwqvx_cmb89c8jh0j22lexppt1lq4tq/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb83e50s0g84lexpdn6nwqvx_cmb89c8jh0j22lexppt1lq4tq', weight_name='lora.safetensors')
image = pipeline('DULCES127001').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb83e50s0g84lexpdn6nwqvx_cmb89c8jh0j22lexppt1lq4tq/discussions) to add images that show off what you’ve made with this LoRA.
|
vcabeli/Qwen2.5-14B-Instruct-Open-R1-GRPO | vcabeli | 2025-05-28T18:38:33Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T13:16:19Z | ---
base_model: Qwen/Qwen2.5-14B-Instruct
library_name: transformers
model_name: Qwen2.5-14B-Instruct-Open-R1-GRPO
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-14B-Instruct-Open-R1-GRPO
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vcabeli/Qwen2.5-14B-Instruct-Open-R1-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vincent-cabeli-owkin/huggingface/runs/hptantwp)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Guru-vs-Murid-di-Grobogan/wATCH.Guru-vs-Murid.viral.video.original | Guru-vs-Murid-di-Grobogan | 2025-05-28T18:38:28Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T18:38:06Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Guru-vs-Murid-di-Grobogan)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Guru-vs-Murid-di-Grobogan)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Guru-vs-Murid-di-Grobogan) |
talphaidze/MNLP_M2_quantized_model_mario | talphaidze | 2025-05-28T18:37:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] | text-generation | 2025-05-28T18:34:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
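Until the authors fill this in, a loading sketch; the tags indicate an 8-bit `compressed-tensors` checkpoint, so a recent transformers with the `compressed-tensors` package installed is assumed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "talphaidze/MNLP_M2_quantized_model_mario"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # loads the quantized weights

messages = [{"role": "user", "content": "In one sentence, what does 8-bit quantization trade away?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```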
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tandogan/sft_finetuned_big | Tandogan | 2025-05-28T18:36:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T18:35:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
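Until the authors fill this in, a generic sketch via the standard text-generation pipeline (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Tandogan/sft_finetuned_big", device_map="auto")
print(generator("The key idea behind supervised fine-tuning is", max_new_tokens=64)[0]["generated_text"])
```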
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vanek-epfl/qwen-mmlu-sft | vanek-epfl | 2025-05-28T18:35:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T18:34:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
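Until the authors fill this in, a minimal chat sketch; the repo name suggests MMLU-style fine-tuning, so a multiple-choice prompt is used for illustration:
```python
from transformers import pipeline

chat = pipeline("text-generation", model="vanek-epfl/qwen-mmlu-sft", device_map="auto")
question = (
    "Which gas makes up most of Earth's atmosphere?\n"
    "A. Oxygen\nB. Nitrogen\nC. Argon\nD. Carbon dioxide\nAnswer:"
)
messages = [{"role": "user", "content": question}]
print(chat(messages, max_new_tokens=16)[0]["generated_text"][-1]["content"])
```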
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb87p3v70ib4lexpbazf1x7n_cmb8951an0izelexp1qtik16v | BootesVoid | 2025-05-28T18:32:36Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T18:32:35Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: QUICK
---
# Cmb87P3V70Ib4Lexpbazf1X7N_Cmb8951An0Izelexp1Qtik16V
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `QUICK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "QUICK",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb87p3v70ib4lexpbazf1x7n_cmb8951an0izelexp1qtik16v/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb87p3v70ib4lexpbazf1x7n_cmb8951an0izelexp1qtik16v', weight_name='lora.safetensors')
image = pipeline('QUICK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb87p3v70ib4lexpbazf1x7n_cmb8951an0izelexp1qtik16v/discussions) to add images that show off what you’ve made with this LoRA.
|
Vortex5/SpicyFlyRP-22B | Vortex5 | 2025-05-28T18:32:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"roleplay",
"storytelling",
"conversational",
"base_model:anthracite-org/magnum-v4-22b",
"base_model:merge:anthracite-org/magnum-v4-22b",
"base_model:invisietch/MiS-Firefly-v0.2-22B",
"base_model:merge:invisietch/MiS-Firefly-v0.2-22B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T17:14:11Z | ---
base_model:
- anthracite-org/magnum-v4-22b
- invisietch/MiS-Firefly-v0.2-22B
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- storytelling
license: other
---
# SpicyFlyRP-22B
SpicyFlyRP-22B is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
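For intuition, SLERP interpolates each pair of corresponding weight tensors along the arc of a great circle rather than along a straight line, which preserves the geometry of the weight space better than a plain average. A minimal sketch of the idea (illustrative only, not mergekit's actual implementation):
```py
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    # Angle between the (normalized) tensors
    v0_u = v0 / (np.linalg.norm(v0) + eps)
    v1_u = v1 / (np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(np.dot(v0_u, v1_u), -1.0, 1.0))
    if theta < eps:  # nearly parallel: plain linear interpolation is safer
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```
In the configuration below, `t` controls the blend between the two models (0 keeps the base model), with separate per-layer schedules for the self-attention and MLP weights.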
### Models Merged
The following models were included in the merge:
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)
* [invisietch/MiS-Firefly-v0.2-22B](https://huggingface.co/invisietch/MiS-Firefly-v0.2-22B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: anthracite-org/magnum-v4-22b
        layer_range: [0, 56]
      - model: invisietch/MiS-Firefly-v0.2-22B
        layer_range: [0, 56]
merge_method: slerp
base_model: anthracite-org/magnum-v4-22b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
``` |
jkgl/Bitnet-MoE-Reasoning-SFT-DPO-Finetuned-Additional-Two | jkgl | 2025-05-28T18:29:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T17:45:09Z | ---
library_name: transformers
--- |
taoranl2/qwen25-coder-32b-hazard_epoch_8_lr_2e-5_r_64 | taoranl2 | 2025-05-28T18:24:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T18:17:29Z | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** taoranl2
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-32B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stellaglow1122/1B-40epoch_10 | stellaglow1122 | 2025-05-28T18:21:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T18:18:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
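Until the authors provide an official snippet, a generic starting point (assuming the standard transformers text-generation pipeline; the repo id comes from this card's metadata, generation settings are illustrative):
```py
from transformers import pipeline

generator = pipeline("text-generation", model="stellaglow1122/1B-40epoch_10")
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```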
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
johansternerup/affe-lora | johansternerup | 2025-05-28T18:20:45Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T18:02:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: affe
---
# Affe Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `affe` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "affe",
    "lora_weights": "https://huggingface.co/johansternerup/affe-lora/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('johansternerup/affe-lora', weight_name='lora.safetensors')
image = pipeline('affe').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/johansternerup/affe-lora/discussions) to add images that show off what you’ve made with this LoRA.
|
longvnhue1/facebook-m2m100_418M-fine_tuning | longvnhue1 | 2025-05-28T18:20:44Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"m2m_100",
"en",
"vi",
"base_model:facebook/m2m100_418M",
"base_model:adapter:facebook/m2m100_418M",
"license:mit",
"region:us"
] | null | 2025-05-28T17:27:49Z | ---
license: mit
language:
- en
- vi
metrics:
- bleu
- accuracy
base_model:
- facebook/m2m100_418M
new_version: facebook/m2m100_418M
library_name: adapter-transformers
--- |
jacktol/atc_pilot_speaker_role_classification_model | jacktol | 2025-05-28T18:18:38Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"en",
"dataset:jacktol/atc_pilot_speaker_role_classification_dataset",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T09:21:11Z | ---
library_name: transformers
license: mit
datasets:
- jacktol/atc_pilot_speaker_role_classification_dataset
language:
- en
metrics:
- f1
- accuracy
base_model:
- microsoft/deberta-v3-large
model-index:
  - name: ATC-Pilot-Speaker Role Classifier
    results:
      - task:
          type: text-classification
        dataset:
          name: atc_pilot_speaker_role_classification_dataset
          type: atc_pilot_speaker_role_classification_dataset
        metrics:
          - name: Accuracy
            type: accuracy
            value: 96.64
          - name: Precision
            type: precision
            value: 96.40
          - name: Recall
            type: recall
            value: 96.91
          - name: F1 Score
            type: f1
            value: 96.65
        source:
          name: Custom Evaluation
          url: https://huggingface.co/datasets/jacktol/atc_pilot_speaker_role_classification_dataset
---
# ATC-Pilot Speaker Role Classification Model
This is a binary sequence classification model designed to determine whether a given air traffic communication utterance originates from a **pilot** or an **air traffic controller (ATC)**, based on text alone.
Traditionally, speaker role attribution in air traffic communication relies on acoustic features such as voice characteristics and channel separation. This model departs from that convention by tackling the task entirely in the text domain, using a transformer-based architecture fine-tuned for speaker role prediction.
## Task Description
The model performs binary classification on single-turn utterances to assign one of two speaker roles:
- `PILOT`
- `ATC`
It is a DeBERTa-v3-large model fine-tuned on manually processed and labeled ATC communication transcripts.
## Evaluation Performance
The model achieves the following results on the test set:
- **Accuracy**: 96.64%
- **Precision**: 96.40%
- **Recall**: 96.91%
- **F1 Score**: 96.65%
## Dataset and Preprocessing
The training data was derived from the **Air Traffic Control (ATC) Corpus**.
This dataset consists of FAA ATC communications recorded at three major airports:
- Dallas/Fort Worth International (DFW)
- Boston Logan International (BOS)
- Washington National (DCA)
The corpus spans 8 CD-ROMs, each containing 1–2 hours of digitized transmissions.
### Data Processing Overview
A custom preprocessing pipeline was developed to extract and label speaker turns from raw transcripts. This included:
- Speaker attribution using known ATC identifier tags (e.g., `LC`, `GCW`, `DR1`) to label lines as `ATC`; all others were assumed to be `PILOT`.
- Normalization of phrases, such as converting `"I L S"` to `"ILS"` and correcting common formatting errors.
- Parsing of structured NIST transcript format using regex-based extraction of `(FROM ...)` and `(TEXT ...)` fields.
- Text cleaning, including:
- Standardizing contractions and quotations
- Removing unintelligible or empty lines
- Filtering non-verbal or numeric-only transmissions
- Converting all text to uppercase for consistency
Each parsed utterance was then labeled and saved as either `PILOT` or `ATC`.
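A minimal sketch of that extraction step (the tag set and helper below are illustrative assumptions, not the project's actual script):
```py
import re

# Illustrative subset of ATC position tags; the real list is longer.
ATC_TAGS = {"LC", "GCW", "DR1"}

def parse_turn(block: str):
    """Extract a (role, text) pair from one NIST-style transcript block."""
    speaker = re.search(r"\(FROM\s+([^)]+)\)", block)
    text = re.search(r"\(TEXT\s+(.*?)\)", block, flags=re.DOTALL)
    if not (speaker and text):
        return None  # skip malformed or empty turns
    role = "ATC" if speaker.group(1).strip().upper() in ATC_TAGS else "PILOT"
    return role, " ".join(text.group(1).upper().split())

print(parse_turn("(FROM LC) (TEXT cleared to land runway two seven)"))
# ('ATC', 'CLEARED TO LAND RUNWAY TWO SEVEN')
```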
### Dataset Balancing
To prevent model bias toward either class, the final dataset was balanced 50/50 across all splits (train, validation, and test). The balanced and preprocessed dataset is available on Hugging Face Datasets as:
[jacktol/atc_pilot_speaker_role_classification_dataset](https://huggingface.co/datasets/jacktol/atc_pilot_speaker_role_classification_dataset)
## Model Architecture
- Base model: `microsoft/deberta-v3-large`
- Task: SequenceClassification (`num_labels=2`)
- Training setup:
- Cosine learning rate scheduler with warmup (10%)
- Batch size: 96–128
- Early stopping based on F1
- Max sequence length: 256 tokens
- Mixed-precision training (FP16)
- Validation every 200 steps
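The setup above might map to `transformers` training arguments roughly as follows (a sketch; the output path and save settings are assumptions):
```py
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="atc-role-classifier",  # assumption
    per_device_train_batch_size=96,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                  # 10% warmup
    fp16=True,                         # mixed-precision training
    eval_strategy="steps",             # "evaluation_strategy" in older releases
    eval_steps=200,
    save_steps=200,
    metric_for_best_model="f1",
    load_best_model_at_end=True,       # needed for early stopping on F1
)
```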
## Intended Use
This model is intended for:
- Speaker role tagging in ATC corpora
- Data cleaning and segmentation in aviation communication research
- Use in multi-modal systems where text-based pre-filtering is required before acoustic modeling
## Limitations
- The model operates without turn-level or dialogue context, which limits its ability to resolve highly ambiguous or generic phrases (e.g., "ROGER", "THANK YOU").
- Some utterances may be genuinely undecidable from text alone — acoustic context or speaker metadata would be required.
## Example
```
Input: "CLEARED FOR TAKEOFF RUNWAY TWO FOUR LEFT"
Prediction: "ATC"
Input: "REQUESTING PUSHBACK"
Prediction: "PILOT"
```
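The same predictions can be reproduced with the `transformers` pipeline API (the exact label strings returned depend on the model's `id2label` mapping):
```py
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jacktol/atc_pilot_speaker_role_classification_model",
)
print(classifier("CLEARED FOR TAKEOFF RUNWAY TWO FOUR LEFT"))
print(classifier("REQUESTING PUSHBACK"))
```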
## Related Work & Benchmark Comparison
This project shares its objective with prior work by [Juan Zuluaga-Gomez et al.](https://huggingface.co/Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc), who also approached **speaker role classification** in air traffic control (ATC) communications using a transformer-based model fine-tuned on textual data alone. Their work uses a BERT-base model fine-tuned on the **UWB-ATCC corpus**, a large-scale ATC dataset focused on both ASR and NLU research.
Juan’s model achieves the following metrics:
- **Accuracy**: 89.03%
- **Precision**: 87.10%
- **Recall**: 91.63%
- **F1 Score**: 89.31%
In comparison, this repository presents a **DeBERTa-v3-large** model fine-tuned on a cleaned and balanced version of the **FAA ATC Corpus**, evaluated on the same test set. The resulting performance shows significant improvements:
**Jack's Model - Test Set Evaluation Metrics**
- **Accuracy**: 96.64%
- **Precision**: 96.40%
- **Recall**: 96.91%
- **F1 Score**: 96.65%
To facilitate reproducibility and model comparison, the repository includes an `evaluation_scripts/` directory containing two Jupyter notebooks:
- `evaluate_juans_model.ipynb`
- `evaluate_jacks_model.ipynb`
These notebooks evaluate both models using the same test split derived from this project's dataset and print full classification metrics. Users are encouraged to explore these scripts to verify results or adapt them for their own datasets.
## References
- [ATCC Corpus (LDC94S14A) - Linguistic Data Consortium](https://catalog.ldc.upenn.edu/LDC94S14A)
- [Juan Zuluaga-Gomez's Hugging Face Model Page](https://huggingface.co/Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc)
- [GitHub Repository – ATC Pilot Speaker Role Classification Task](https://github.com/jack-tol/atc-pilot-speaker-role-classification-task)
- [Uploaded ATC-Pilot Speaker Role Classification Dataset on Hugging Face Datasets](https://huggingface.co/datasets/jacktol/atc_pilot_speaker_role_classification_dataset) |
llmware/slim-sql-phi-3-ov | llmware | 2025-05-28T18:16:40Z | 0 | 0 | null | [
"openvino",
"phi3",
"green",
"p3",
"llmware-fx",
"ov",
"emerald",
"custom_code",
"base_model:llmware/slim-sql-phi-3",
"base_model:quantized:llmware/slim-sql-phi-3",
"license:apache-2.0",
"region:us"
] | null | 2024-08-31T09:53:19Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-sql-phi-3
base_model_relation: quantized
tags: [green, p3, llmware-fx, ov, emerald]
---
# slim-sql-phi-3-ov
**slim-sql-phi-3-ov** is a small, specialized function-calling model that takes a table schema and a natural-language query as input and outputs a SQL statement that corresponds to the query and can be run against a database table. It is a very small text-to-SQL model designed for reasonable accuracy on single tables and relatively straightforward queries, and for easy integration into multi-step processes.
This is an OpenVINO int4 quantized version of slim-sql-phi-3, providing a very fast, very small inference implementation optimized for AI PCs using Intel GPU, CPU, and NPU.
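A usage sketch, assuming llmware's `ModelCatalog` loading interface (the `add_context` parameter name is an assumption; see the llmware repository for the exact API):
```py
from llmware.models import ModelCatalog

# Assumes the model is registered in the llmware catalog under this name.
model = ModelCatalog().load_model("slim-sql-phi-3-ov")

table_schema = "CREATE TABLE customers (id INTEGER, name TEXT, balance REAL)"
question = "What is the name of the customer with the highest balance?"

# Pass the schema as context alongside the natural-language query (assumption).
response = model.inference(question, add_context=table_schema)
print(response)
```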
### Model Description
- **Developed by:** llmware
- **Model type:** phi-3
- **Parameters:** 3.8 billion
- **Model Parent:** llmware/slim-sql-phi-3
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Text-to-SQL conversion
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
rsh-raj/node-commits_without_defn | rsh-raj | 2025-05-28T18:14:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/codellama-7b-bnb-4bit",
"base_model:finetune:unsloth/codellama-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T18:14:22Z | ---
base_model: unsloth/codellama-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rsh-raj
- **License:** apache-2.0
- **Finetuned from model:** unsloth/codellama-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kyars/llama-enames | kyars | 2025-05-28T18:14:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T17:14:18Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kyars
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|