modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Cwdn/sorting | Cwdn | 2025-09-15T07:43:21Z | 0 | 0 | null |
[
"dataset:jupyter-agent/jupyter-agent-dataset",
"base_model:deepseek-ai/DeepSeek-V3.1",
"base_model:finetune:deepseek-ai/DeepSeek-V3.1",
"region:us"
] | null | 2025-09-15T07:42:05Z |
---
datasets:
- jupyter-agent/jupyter-agent-dataset
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-V3.1
---
|
mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF | mradermacher | 2025-09-15T07:42:28Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"causal-lm",
"text-generation",
"instruct",
"chat",
"fine-tuned",
"merged-lora",
"llama-3",
"hermes",
"discord-dataset",
"conversational-ai",
"chatml",
"pytorch",
"open-weights",
"8b-parameters",
"en",
"dataset:mookiezi/Discord-Dialogues",
"base_model:mookiezi/Discord-Micae-Hermes-3-8B",
"base_model:quantized:mookiezi/Discord-Micae-Hermes-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-09-15T06:58:56Z |
---
base_model: mookiezi/Discord-Micae-Hermes-3-8B
datasets:
- mookiezi/Discord-Dialogues
language:
- en
library_name: transformers
license: llama3
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- transformers
- causal-lm
- text-generation
- instruct
- chat
- fine-tuned
- merged-lora
- llama-3
- hermes
- discord-dataset
- conversational-ai
- chatml
- pytorch
- open-weights
- 8b-parameters
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Discord-Micae-Hermes-3-8B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
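As a minimal sketch (not part of the original card), one way to fetch and run one of these quants is via llama-cpp-python; the repo and file names below match this repo's Q4_K_M entry, while `n_ctx` and `max_tokens` are illustrative assumptions:
```python
# Minimal sketch: download a quant from this repo and chat with it via llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`; n_ctx/max_tokens are illustrative.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF",
    filename="Discord-Micae-Hermes-3-8B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```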
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
bukoi/so101_policy_05 | bukoi | 2025-09-15T07:41:50Z | 0 | 0 | lerobot |
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:bukoi/so101_pick_place_05",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-15T07:41:19Z |
---
datasets: bukoi/so101_pick_place_05
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
vzaguskin/vosk_rknn | vzaguskin | 2025-09-15T07:41:31Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-09-15T07:16:01Z |
---
license: mit
---
RKNN-converted (Rockchip RK3588) version of https://huggingface.co/alphacep/vosk-model-small-streaming-ru.
The conversion script is included.
Usage:
`sherpa-onnx --encoder=vosk-ru-encoder.rknn --decoder=vosk-ru-decoder.rknn --joiner=vosk-ru-joiner.rknn --provider=rknn --tokens=tokens.txt test.wav`
|
mradermacher/Discord-Micae-Hermes-3-8B-GGUF | mradermacher | 2025-09-15T07:40:20Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"causal-lm",
"text-generation",
"instruct",
"chat",
"fine-tuned",
"merged-lora",
"llama-3",
"hermes",
"discord-dataset",
"conversational-ai",
"chatml",
"pytorch",
"open-weights",
"8b-parameters",
"en",
"dataset:mookiezi/Discord-Dialogues",
"base_model:mookiezi/Discord-Micae-Hermes-3-8B",
"base_model:quantized:mookiezi/Discord-Micae-Hermes-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-09-15T06:49:37Z |
---
base_model: mookiezi/Discord-Micae-Hermes-3-8B
datasets:
- mookiezi/Discord-Dialogues
language:
- en
library_name: transformers
license: llama3
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- transformers
- causal-lm
- text-generation
- instruct
- chat
- fine-tuned
- merged-lora
- llama-3
- hermes
- discord-dataset
- conversational-ai
- chatml
- pytorch
- open-weights
- 8b-parameters
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Discord-Micae-Hermes-3-8B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Prasanna05/hindi_lora_adapter | Prasanna05 | 2025-09-15T07:39:17Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T07:38:16Z |
---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Prasanna05
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NCSOFT/VARCO-VISION-2.0-1.7B | NCSOFT | 2025-09-15T07:36:57Z | 5,253 | 15 | transformers |
[
"transformers",
"safetensors",
"llava_onevision",
"image-to-text",
"multimodal",
"conversational",
"ncsoft",
"ncai",
"varco",
"image-text-to-text",
"en",
"ko",
"arxiv:2509.10105",
"arxiv:2408.03326",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-07-08T06:25:39Z |
---
license: cc-by-nc-4.0
base_model:
- Qwen/Qwen3-1.7B
- google/siglip2-so400m-patch16-384
library_name: transformers
tags:
- multimodal
- conversational
- ncsoft
- ncai
- varco
pipeline_tag: image-text-to-text
language:
- en
- ko
---
# VARCO-VISION-2.0-1.7B
<div align="center">
<img src="./varco-vision.png" width="100%" style="background-color:white; padding:10px;" />
</div>
## Introduction
**VARCO-VISION-2.0** is a multimodal AI model capable of understanding both images and text to answer user queries. It supports multi-image inputs, enabling effective processing of complex content such as documents, tables, and charts. The model demonstrates strong comprehension in both Korean and English, with significantly improved text generation capabilities and a deeper understanding of Korean cultural context. Compared to its predecessor, performance has been notably enhanced across various benchmarks, and its usability in real-world scenarios—such as everyday Q&A and information summarization—has also improved.
In addition to the 14B full-scale model, a lightweight 1.7B version is available for on-device use, making it accessible on personal devices such as smartphones and PCs. VARCO-VISION-2.0 is a powerful open-weight AI model built for Korean users and is freely available for a wide range of applications.
## 🚨News🎙️
- 📝 2025-09-12: We published the technical report of VARCO-VISION-2.0 at [link](https://arxiv.org/abs/2509.10105)
- 🛠️ 2025-08-22: We updated the checkpoint of VARCO-VISION-2.0-1.7B for improved performance.
- 📰 2025-07-28: We released VARCO-VISION-2.0-1.7B-OCR at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR)
- 📰 2025-07-28: We released VARCO-VISION-2.0-1.7B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B)
- 🛠️ 2025-07-18: We updated the checkpoint of VARCO-VISION-2.0-14B for improved performance.
- 📰 2025-07-16: We released VARCO-VISION-2.0-14B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B)
- 📰 2025-07-16: We released GME-VARCO-VISION-Embedding at [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding)
## Key Features
- **Multi-image Understanding**: Newly added support for multi-image inputs enables the model to analyze multiple images simultaneously and make more holistic and context-aware decisions.
- **Korean Language Specialization**: The model is further specialized for Korean, with a deeper understanding of Korean language, context, and culture. Korean text generation has been significantly improved, resulting in more natural, fluent, and accurate responses.
- **OCR with Text Localization**: Unlike typical models that only recognize and generate text from images, VARCO-VISION-2.0 can also identify the position of the text and provide bounding boxes around it. This makes it especially useful for document understanding, signage interpretation, and structured visual data.
- **Enhanced Safety**: The model now offers improved handling of harmful or sexually explicit content, ensuring safer and more reliable interactions.
<div align="center">
<img src="./figure.png" width="100%" />
</div>
## VARCO-VISION-2.0 Family
| Model Name | Base Models (Vision / Language) | HF Link |
| :------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: |
| VARCO-VISION-2.0-14B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-14B ](https://huggingface.co/Qwen/Qwen3-14B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B) |
| VARCO-VISION-2.0-1.7B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B) |
| VARCO-VISION-2.0-1.7B-OCR | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR) |
| GME-VARCO-VISION-Embedding | [Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) | [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding) |
## Model Architecture
VARCO-VISION-2.0 follows the architecture of [LLaVA-OneVision](https://arxiv.org/abs/2408.03326).
## Evaluation
We used [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for evaluation whenever possible, and conducted our own implementations only for benchmarks not supported by the toolkit, **ensuring fair comparisons** with various open-weight models.
Please note that for certain benchmarks involving LLM-based evaluation (e.g., LLaVABench), results may not be exactly reproducible due to variations in the underlying LLM behavior.
### Korean Benchmark
| Benchmark | InternVL3-2B | Ovis2-2B | VARCO-VISION-2.0-1.7B |
| :-----------: | :----------: | :------: | :-------------------: |
| K-MMBench_DEV | *76.9* | 68.4 | **77.9** |
| K-MMStar | **50.1** | 10.9 | *40.8* |
| K-SEED | *69.2* | 34.5 | **70.7** |
| K-LLaVA-W | 47.6 | *67.2* | **73.5** |
| K-DTCBench | **68.8** | 44.6 | *64.2* |
| ***AVERAGE*** | *62.5* | 45.1 | **65.4** |
### English Benchmark
| Benchmark | InternVL3-2B | Ovis2-2B | VARCO-VISION-2.0-1.7B |
| :-------------: | :----------: | :------: | :-------------------: |
| MMStar | **61.1** | *56.7* | 54.5 |
| MMMU_VAL | **48.7** | *45.6* | 44.1 |
| MathVista | 57.6 | **64.1** | *61.1* |
| OCRBench | *83.1* | **87.3** | 83.0 |
| AI2D | *78.6* | **82.7** | 76.0 |
| HallusionBench | 41.9 | **50.2** | *43.0* |
| MMVet | **67.0** | *58.3* | 52.7 |
| SEEDBench_IMG | **75.0** | 74.4 | *74.5* |
| LLaVABench | 72.1 | *76.6* | **77.3** |
| RealWorldQA | 65.1 | *66.0* | **66.8** |
| POPE | **90.1** | 87.8 | *88.6* |
| ScienceQA_TEST | **95.8** | *91.2* | 84.0 |
| SEEDBench2_Plus | 64.8 | **67.4** | *66.9* |
| BLINK | **53.1** | *47.9* | 47.2 |
| TextVQA_VAL | *78.6* | **80.0** | 77.0 |
| ChartQA_TEST | *76.0* | **81.4** | 75.7 |
| Q-Bench1_VAL | 71.9 | **76.3** | *72.3* |
| A-Bench_VAL | *74.3* | **76.2** | 72.4 |
| DocVQA_TEST | *88.2* | **91.9** | 83.5 |
| InfoVQA_TEST | 66.9 | **71.7** | 65.0 |
| ***AVERAGE*** | *70.5* | **71.7** | 68.3 |
### Text-only Benchmark
| Benchmark | InternVL3-2B | Ovis2-2B | VARCO-VISION-2.0-1.7B |
| :-------------: | :----------: | :------: | :-------------------: |
| MMLU | **59.9** | 12.9 | *55.3* |
| MT-Bench | *62.8* | 61.4 | **72.3** |
| KMMLU | **38.0** | *31.1* | 10.4 |
| KoMT-Bench | 29.1 | *34.4* | **59.1** |
| LogicKor | 25.6 | *31.2* | **53.7** |
| ***AVERAGE*** | *43.1* | 34.2 | **50.2** |
> **Note:** Some models show unusually low performance on the MMLU benchmark. This is primarily due to their failure to correctly follow the expected output format when only few-shot exemplars are provided in the prompts. Please take this into consideration when interpreting the results.
### Korean Cultural Benchmark
| Benchmark | InternVL3-2B | Ovis2-2B | VARCO-VISION-2.0-1.7B |
| :--------------: | :----------: | :------: | :-------------------: |
| K-Viscuit | *60.0* | **64.1** | 57.7 |
| PangeaBench (ko) | **66.2** | 63.1 | *63.8* |
| ***AVERAGE*** | *63.1* | **63.6** | 60.8 |
### OCR Benchmark
| Benchmark | PaddleOCR | EasyOCR | VARCO-VISION-2.0-1.7B |
| :-----------: | :-------: | :-----: | :-------------------: |
| CORD | *91.4* | 77.8 | **96.2** |
| ICDAR2013 | *92.0* | 85.0 | **95.9** |
| ICDAR2015 | **73.7** | 57.9 | **73.7** |
| ***AVERAGE*** | *85.7* | 73.6 | **88.6** |
## Usage
To use this model, we recommend installing `transformers` version **4.53.1 or higher**. While it may work with earlier versions, using **4.53.1 or above is strongly recommended**, especially to ensure optimal performance for the **multi-image feature**.
The basic usage is **identical to** [LLaVA-OneVision](https://huggingface.co/docs/transformers/main/en/model_doc/llava_onevision#usage-example):
```python
import torch
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
model_name = "NCSOFT/VARCO-VISION-2.0-1.7B"
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
model_name,
torch_dtype=torch.float16,
attn_implementation="sdpa",
device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_name)
conversation = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B/resolve/main/demo.jpg"},
{"type": "text", "text": "각 박스마다 한 줄씩 색상과 글자를 정확하게 출력해주세요."},
],
},
]
inputs = processor.apply_chat_template(
conversation,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.decode(generate_ids_trimmed[0], skip_special_tokens=True)
print(output)
```
<details>
<summary>Multi image inference</summary>
```python
conversation = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "이미지 간의 유사점을 파악하세요."},
],
},
]
inputs = processor.apply_chat_template(
conversation,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.decode(generate_ids_trimmed[0], skip_special_tokens=True)
print(output)
```
</details>
<details>
<summary>Batch inference</summary>
All inputs in a batch must have the same modality structure—for example, text-only with text-only, single-image with single-image, and multi-image with multi-image—to ensure correct batch inference.
```python
conversation_1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "text", "text": "이미지를 설명해주세요."},
],
},
]
conversation_2 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "이 이미지에 표시된 것은 무엇인가요?"},
],
},
]
inputs = processor.apply_chat_template(
[conversation_1, conversation_2],
add_generation_prompt=True,
tokenize=True,
return_dict=True,
padding=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.batch_decode(generate_ids_trimmed, skip_special_tokens=True)
print(output)
```
</details>
<details>
<summary>OCR inference</summary>
```python
from PIL import Image
image = Image.open("/path/to/image.jpg")
# Image upscaling for OCR performance boost
w, h = image.size
target_size = 2304
if max(w, h) < target_size:
scaling_factor = target_size / max(w, h)
new_w = int(w * scaling_factor)
new_h = int(h * scaling_factor)
image = image.resize((new_w, new_h))
conversation = [
{
"role": "user",
"content": [
{"type": "image", "image": image},
{"type": "text", "text": "<ocr>"},
],
},
]
inputs = processor.apply_chat_template(
conversation,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.decode(generate_ids_trimmed[0], skip_special_tokens=False)
print(output)
```
</details>
## Citation
```bibtex
@misc{cha2025varcovision20technicalreport,
title={VARCO-VISION-2.0 Technical Report},
author={Young-rok Cha and Jeongho Ju and SunYoung Park and Jong-Hyeon Lee and Younghyun Yu and Youngjune Kim},
year={2025},
eprint={2509.10105},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.10105},
}
```
|
GYUHYUK/helth_gguf | GYUHYUK | 2025-09-15T07:36:48Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T07:34:52Z |
---
license: apache-2.0
---
|
ahmedsleemtest/hadi-8b-phase0 | ahmedsleemtest | 2025-09-15T07:36:20Z | 5 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-15T07:25:59Z |
---
base_model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ahmedsleemtest
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/TinyLlama-HTMLWeb-coder-GGUF | mradermacher | 2025-09-15T07:36:01Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"en",
"dataset:Tesslate/UIGEN-T2",
"base_model:pharrow/TinyLlama-HTMLWeb-coder",
"base_model:quantized:pharrow/TinyLlama-HTMLWeb-coder",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T07:25:44Z |
---
base_model: pharrow/TinyLlama-HTMLWeb-coder
datasets:
- Tesslate/UIGEN-T2
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/pharrow/TinyLlama-HTMLWeb-coder
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#TinyLlama-HTMLWeb-coder-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-HTMLWeb-coder-GGUF/resolve/main/TinyLlama-HTMLWeb-coder.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
webnn/Phi-4-mini-instruct-onnx-webnn | webnn | 2025-09-15T07:31:04Z | 0 | 0 | null |
[
"onnx",
"license:mit",
"region:us"
] | null | 2025-09-15T07:20:54Z |
---
license: mit
---
Based on https://huggingface.co/microsoft/Phi-4-mini-instruct
## Build Model
- Clone https://github.com/microsoft/onnxruntime-genai (at commit d77033c) with a minor modification for WebNN that removes the `If`
node, as follows:
```patch
diff --git a/src/python/py/models/builder.py b/src/python/py/models/builder.py
index 7a0cb70d..774a3861 100644
--- a/src/python/py/models/builder.py
+++ b/src/python/py/models/builder.py
@@ -1459,7 +1459,7 @@ class Model:
self.rope_attrs["save_caches"] = False
cos_cache_small, sin_cache_small = self.make_rotary_embedding_caches(cos_cache_name=cos_cache_small_name, sin_cache_name=sin_cache_small_name)
- if self.ep in ["dml", "NvTensorRtRtx"]:
+ if self.ep in ["dml", "NvTensorRtRtx", "webgpu"]:
# Concat small and large cos/sin caches for DML and NvTensorRtRtx EPs
# These EPs don't support the If operator
cos_cache = torch.cat((cos_cache_small, cos_cache_large), dim=0)
```
- Build the model with the command: `python -m src/python/py/models/builder.py -m microsoft/Phi-4-mini-instruct -o Phi-4-mini-instruct-onnx -e webgpu -c cache-dir -p int4 --extra_options int4_block_size=32 int4_accuracy_level=4 int4_op_types_to_quantize=MatMul/Gather`
- The generated external data (`model.onnx.data`) is larger than 2 GB, which is not suitable for ORT-Web. Move some weights into `model.onnx` to reduce the size of `model.onnx.data` with the following script:
```python
import onnx
from onnx.external_data_helper import convert_model_to_external_data
# Load the model
model = onnx.load("model.onnx")
# Re-convert to external data with a larger size_threshold, so tensors under 5 MB stay inside model.onnx
convert_model_to_external_data(model, all_tensors_to_one_file=True, location='model.onnx.data', size_threshold=1024 * 1024 * 5)
onnx.save_model(model, "new_model.onnx")
```
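Not in the original card, but as a quick sanity check (assuming `onnxruntime` is installed and `new_model.onnx` sits next to `model.onnx.data`), the re-saved model can be loaded once to confirm its external data still resolves:
```python
# Sketch: verify the re-saved model loads and list its input names.
import onnxruntime as ort

sess = ort.InferenceSession("new_model.onnx", providers=["CPUExecutionProvider"])
print([inp.name for inp in sess.get_inputs()])
```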
|
Adam0x75/distilbert-finetuned-sentiment-analysis | Adam0x75 | 2025-09-15T07:30:25Z | 0 | 1 | transformers |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-15T07:30:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF | mradermacher | 2025-09-15T07:30:13Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"kv",
"vro",
"liv",
"base_model:tartuNLP/Llama-SMUGRI-7B-Instruct-MTI",
"base_model:quantized:tartuNLP/Llama-SMUGRI-7B-Instruct-MTI",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-15T06:59:25Z |
---
base_model: tartuNLP/Llama-SMUGRI-7B-Instruct-MTI
language:
- kv
- vro
- liv
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/tartuNLP/Llama-SMUGRI-7B-Instruct-MTI
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q4_1.gguf) | i1-Q4_1 | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
illian64/madlad400-10b-mt-ct2-bfloat16 | illian64 | 2025-09-15T07:29:55Z | 1 | 0 | transformers |
[
"transformers",
"text2text-generation",
"translation",
"multilingual",
"en",
"ru",
"es",
"fr",
"de",
"it",
"pt",
"pl",
"nl",
"vi",
"tr",
"sv",
"id",
"ro",
"cs",
"zh",
"hu",
"ja",
"th",
"fi",
"fa",
"uk",
"da",
"el",
"no",
"bg",
"sk",
"ko",
"ar",
"lt",
"ca",
"sl",
"he",
"et",
"lv",
"hi",
"sq",
"ms",
"az",
"sr",
"ta",
"hr",
"kk",
"is",
"ml",
"mr",
"te",
"af",
"gl",
"fil",
"be",
"mk",
"eu",
"bn",
"ka",
"mn",
"bs",
"uz",
"ur",
"sw",
"yue",
"ne",
"kn",
"kaa",
"gu",
"si",
"cy",
"eo",
"la",
"hy",
"ky",
"tg",
"ga",
"mt",
"my",
"km",
"tt",
"so",
"ku",
"ps",
"pa",
"rw",
"lo",
"ha",
"dv",
"fy",
"lb",
"ckb",
"mg",
"gd",
"am",
"ug",
"ht",
"grc",
"hmn",
"sd",
"jv",
"mi",
"tk",
"ceb",
"yi",
"ba",
"fo",
"or",
"xh",
"su",
"kl",
"ny",
"sm",
"sn",
"co",
"zu",
"ig",
"yo",
"pap",
"st",
"haw",
"as",
"oc",
"cv",
"lus",
"tet",
"gsw",
"sah",
"br",
"rm",
"sa",
"bo",
"om",
"se",
"ce",
"cnh",
"ilo",
"hil",
"udm",
"os",
"lg",
"ti",
"vec",
"ts",
"tyv",
"kbd",
"ee",
"iba",
"av",
"kha",
"to",
"tn",
"nso",
"fj",
"zza",
"ak",
"ada",
"otq",
"dz",
"bua",
"cfm",
"ln",
"chm",
"gn",
"krc",
"wa",
"hif",
"yua",
"srn",
"war",
"rom",
"bik",
"pam",
"sg",
"lu",
"ady",
"kbp",
"syr",
"ltg",
"myv",
"iso",
"kac",
"bho",
"ay",
"kum",
"qu",
"za",
"pag",
"ngu",
"ve",
"pck",
"zap",
"tyz",
"hui",
"bbc",
"tzo",
"tiv",
"ksd",
"gom",
"min",
"ang",
"nhe",
"bgp",
"nzi",
"nnb",
"nv",
"zxx",
"bci",
"kv",
"new",
"mps",
"alt",
"meu",
"bew",
"fon",
"iu",
"abt",
"mgh",
"mnw",
"tvl",
"dov",
"tlh",
"ho",
"kw",
"mrj",
"meo",
"crh",
"mbt",
"emp",
"ace",
"ium",
"mam",
"gym",
"mai",
"crs",
"pon",
"ubu",
"fip",
"quc",
"gv",
"kj",
"btx",
"ape",
"chk",
"rcf",
"shn",
"tzh",
"mdf",
"ppk",
"ss",
"gag",
"cab",
"kri",
"seh",
"ibb",
"tbz",
"bru",
"enq",
"ach",
"cuk",
"kmb",
"wo",
"kek",
"qub",
"tab",
"bts",
"kos",
"rwo",
"cak",
"tuc",
"bum",
"cjk",
"gil",
"stq",
"tsg",
"quh",
"mak",
"arn",
"ban",
"jiv",
"sja",
"yap",
"tcy",
"toj",
"twu",
"xal",
"amu",
"rmc",
"hus",
"nia",
"kjh",
"bm",
"guh",
"mas",
"acf",
"dtp",
"ksw",
"bzj",
"din",
"zne",
"mad",
"msi",
"mag",
"mkn",
"kg",
"lhu",
"ch",
"qvi",
"mh",
"djk",
"sus",
"mfe",
"srm",
"dyu",
"ctu",
"gui",
"pau",
"inb",
"bi",
"mni",
"guc",
"jam",
"wal",
"jac",
"bas",
"gor",
"skr",
"nyu",
"noa",
"sda",
"gub",
"nog",
"cni",
"teo",
"tdx",
"sxn",
"rki",
"nr",
"frp",
"alz",
"taj",
"lrc",
"cce",
"rn",
"jvn",
"hvn",
"nij",
"dwr",
"izz",
"msm",
"bus",
"ktu",
"chr",
"maz",
"tzj",
"suz",
"knj",
"bim",
"gvl",
"bqc",
"tca",
"pis",
"prk",
"laj",
"mel",
"qxr",
"niq",
"ahk",
"shp",
"hne",
"spp",
"koi",
"krj",
"quf",
"luz",
"agr",
"tsc",
"mqy",
"gof",
"gbm",
"miq",
"dje",
"awa",
"bjj",
"qvz",
"sjp",
"tll",
"raj",
"kjg",
"bgz",
"quy",
"cbk",
"akb",
"oj",
"ify",
"mey",
"ks",
"cac",
"brx",
"qup",
"syl",
"jax",
"ff",
"ber",
"tks",
"trp",
"mrw",
"adh",
"smt",
"srr",
"ffm",
"qvc",
"mtr",
"ann",
"aa",
"noe",
"nut",
"gyn",
"kwi",
"xmm",
"msb",
"dataset:allenai/MADLAD-400",
"base_model:google/madlad400-10b-mt",
"base_model:finetune:google/madlad400-10b-mt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | 2025-09-14T13:10:56Z |
---
license: apache-2.0
datasets:
- allenai/MADLAD-400
language:
- multilingual
- en
- ru
- es
- fr
- de
- it
- pt
- pl
- nl
- vi
- tr
- sv
- id
- ro
- cs
- zh
- hu
- ja
- th
- fi
- fa
- uk
- da
- el
- 'no'
- bg
- sk
- ko
- ar
- lt
- ca
- sl
- he
- et
- lv
- hi
- sq
- ms
- az
- sr
- ta
- hr
- kk
- is
- ml
- mr
- te
- af
- gl
- fil
- be
- mk
- eu
- bn
- ka
- mn
- bs
- uz
- ur
- sw
- yue
- ne
- kn
- kaa
- gu
- si
- cy
- eo
- la
- hy
- ky
- tg
- ga
- mt
- my
- km
- tt
- so
- ku
- ps
- pa
- rw
- lo
- ha
- dv
- fy
- lb
- ckb
- mg
- gd
- am
- ug
- ht
- grc
- hmn
- sd
- jv
- mi
- tk
- ceb
- yi
- ba
- fo
- or
- xh
- su
- kl
- ny
- sm
- sn
- co
- zu
- ig
- yo
- pap
- st
- haw
- as
- oc
- cv
- lus
- tet
- gsw
- sah
- br
- rm
- sa
- bo
- om
- se
- ce
- cnh
- ilo
- hil
- udm
- os
- lg
- ti
- vec
- ts
- tyv
- kbd
- ee
- iba
- av
- kha
- to
- tn
- nso
- fj
- zza
- ak
- ada
- otq
- dz
- bua
- cfm
- ln
- chm
- gn
- krc
- wa
- hif
- yua
- srn
- war
- rom
- bik
- pam
- sg
- lu
- ady
- kbp
- syr
- ltg
- myv
- iso
- kac
- bho
- ay
- kum
- qu
- za
- pag
- ngu
- ve
- pck
- zap
- tyz
- hui
- bbc
- tzo
- tiv
- ksd
- gom
- min
- ang
- nhe
- bgp
- nzi
- nnb
- nv
- zxx
- bci
- kv
- new
- mps
- alt
- meu
- bew
- fon
- iu
- abt
- mgh
- mnw
- tvl
- dov
- tlh
- ho
- kw
- mrj
- meo
- crh
- mbt
- emp
- ace
- ium
- mam
- gym
- mai
- crs
- pon
- ubu
- fip
- quc
- gv
- kj
- btx
- ape
- chk
- rcf
- shn
- tzh
- mdf
- ppk
- ss
- gag
- cab
- kri
- seh
- ibb
- tbz
- bru
- enq
- ach
- cuk
- kmb
- wo
- kek
- qub
- tab
- bts
- kos
- rwo
- cak
- tuc
- bum
- cjk
- gil
- stq
- tsg
- quh
- mak
- arn
- ban
- jiv
- sja
- yap
- tcy
- toj
- twu
- xal
- amu
- rmc
- hus
- nia
- kjh
- bm
- guh
- mas
- acf
- dtp
- ksw
- bzj
- din
- zne
- mad
- msi
- mag
- mkn
- kg
- lhu
- ch
- qvi
- mh
- djk
- sus
- mfe
- srm
- dyu
- ctu
- gui
- pau
- inb
- bi
- mni
- guc
- jam
- wal
- jac
- bas
- gor
- skr
- nyu
- noa
- sda
- gub
- nog
- cni
- teo
- tdx
- sxn
- rki
- nr
- frp
- alz
- taj
- lrc
- cce
- rn
- jvn
- hvn
- nij
- dwr
- izz
- msm
- bus
- ktu
- chr
- maz
- tzj
- suz
- knj
- bim
- gvl
- bqc
- tca
- pis
- prk
- laj
- mel
- qxr
- niq
- ahk
- shp
- hne
- spp
- koi
- krj
- quf
- luz
- agr
- tsc
- mqy
- gof
- gbm
- miq
- dje
- awa
- bjj
- qvz
- sjp
- tll
- raj
- kjg
- bgz
- quy
- cbk
- akb
- oj
- ify
- mey
- ks
- cac
- brx
- qup
- syl
- jax
- ff
- ber
- tks
- trp
- mrw
- adh
- smt
- srr
- ffm
- qvc
- mtr
- ann
- kaa
- aa
- noe
- nut
- gyn
- kwi
- xmm
- msb
base_model:
- google/madlad400-10b-mt
pipeline_tag: translation
library_name: transformers
tags:
- text2text-generation
---
**Disclaimer**: [illian64](https://huggingface.co/illian64), who was not involved in this research, converted the original model to a CTranslate2-optimized model and wrote the contents of this model card based on [google/madlad400-10b-mt](https://huggingface.co/google/madlad400-10b-mt).
Conversion parameters:
`ct2-transformers-converter --model google/madlad400-10b-mt --quantization bfloat16 --output_dir madlad400-10b-mt-ct2-bfloat16`
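Not part of the original card, but a rough sketch of how a CTranslate2 conversion like this one is typically used for translation; the tokenizer repo and the `<2xx>` target-language prefix follow the upstream MADLAD-400 convention, and the device setting is an assumption:
```python
# Sketch: translate with the converted model via CTranslate2
# (assumes `pip install ctranslate2 transformers sentencepiece`).
import ctranslate2
import transformers

translator = ctranslate2.Translator("madlad400-10b-mt-ct2-bfloat16", device="cpu")
tokenizer = transformers.AutoTokenizer.from_pretrained("google/madlad400-10b-mt")

# MADLAD-400 expects a <2xx> target-language prefix, e.g. <2de> for German.
text = "<2de> How are you today?"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(text))
result = translator.translate_batch([tokens])
output_ids = tokenizer.convert_tokens_to_ids(result[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```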
|
manunin/llama-3.2-1b-fraud-advices-v2 | manunin | 2025-09-15T07:28:16Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T07:28:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChenJiayao/q-FrozenLake-v1-4x4-noSlippery
|
ChenJiayao
| 2025-09-15T07:27:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-15T07:26:25Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.50 +/- 0.50
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub comes from the Hugging Face Deep RL Course helper utilities
model = load_from_hub(repo_id="ChenJiayao/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
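A short greedy rollout is sketched below. It assumes the pickled dictionary exposes a `qtable` array (as in the Deep RL Course utilities) and uses the classic 4-tuple `gym` step API; gymnasium's `reset`/`step` return extra values:
```python
import numpy as np

# Greedy evaluation episode using the loaded Q-table (assumes model["qtable"] exists)
state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # best action under the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```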
|
SkeletonDiffusion/ModelCheckpoints
|
SkeletonDiffusion
| 2025-09-15T07:26:10Z | 0 | 0 | null |
[
"human-motion-generation",
"human-motion-prediction",
"probabilistic-human-motion-generation",
"en",
"arxiv:2501.06035",
"license:bsd-2-clause",
"region:us"
] | null | 2025-06-04T21:28:08Z |
---
license: bsd-2-clause
tags:
- human-motion-generation
- human-motion-prediction
- probabilistic-human-motion-generation
pinned: true
language:
- en
---
# SkeletonDiffusion Model Card
This model card focuses on the model associated with the SkeletonDiffusion model, from _Nonisotropic Gaussian Diffusion for Realistic 3D Human Motion Prediction_, [arxiv](https://arxiv.org/abs/2501.06035), codebase available [here](https://github.com/Ceveloper/SkeletonDiffusion/tree/main).
SkeletonDiffusion is a probabilistic human motion prediction model that takes 0.5s of human motion as input and generates 2s of future motion with an inference time of 0.4s.
SkeletonDiffusion generates motions that are at the same time realistic and diverse. It is a latent diffusion model with a custom graph attention architecture, trained with nonisotropic Gaussian diffusion.
We provide a model for each dataset mentioned in the paper (AMASS, FreeMan, Human3.6M), and a further model trained on AMASS with hands joints (AMASS-MANO).
<img src="https://cdn-uploads.huggingface.co/production/uploads/6501e39f192a9bf2226a864d/sIe8dJwlrWSMSnYiVFCpl.png" alt="drawing" width="600"/>
## Online demo
The model trained on AMASS is accessible in a demo workflow that predicts future motions from videos.
The demo extracts 3D human poses from video via Neural Localizer Fields ([NLF](https://istvansarandi.com/nlf/)) by Sarandi et al., and SkeletonDiffusion generates future motions conditioned on the extracted poses.
SkeletonDiffusion was not trained on real-world, noisy data, but it nonetheless handles most cases reasonably well.
## Usage
### Direct use
You can use the model for any purpose permitted under the BSD 2-Clause License.
### Train and Inference
Please refer to our [GitHub](https://github.com/Ceveloper/SkeletonDiffusion/tree/main) codebase for both use cases.
|
cheemzy/dqn-SpaceInvaders
|
cheemzy
| 2025-09-15T07:26:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-15T07:25:26Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 508.00 +/- 182.84
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cheemzy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cheemzy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga cheemzy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
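If you prefer to bypass the Zoo scripts, the downloaded checkpoint can also be loaded directly with Stable-Baselines3; the path below is hypothetical and depends on where `rl_zoo3.load_from_hub` placed the zip:
```python
from stable_baselines3 import DQN

# Hypothetical path: adjust to the location chosen by rl_zoo3.load_from_hub
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
print(model.policy)  # inspect the loaded CnnPolicy
```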
|
Addax-Data-Science/NZS-WEK-v3-03
|
Addax-Data-Science
| 2025-09-15T07:23:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-15T07:10:58Z |
---
{}
---
This repository contains open-source models redistributed for easy integration with [AddaxAI](https://addaxdatascience.com/addaxai/), hosted by [Addax Data Science](https://addaxdatascience.com/). Each model retains its original license (see license files) and attribution. Addax Data Science complies with all original license terms. Users must review and comply with individual model licenses before use. See below for detailed model information including original sources, licenses, and attributions.
<p style="text-align: left;"><strong>Owner</strong></p>
<p style="text-align: left;">New Zealand Department of Conservation</p>
<p style="text-align: left;"><strong>Developer</strong></p>
<p style="text-align: left;">wekaResearch</p>
<p style="text-align: left;"><strong>Links</strong></p>
<ul>
<li style="text-align: left;"><a href="https://wekaresearch.com/">Learn more</a></li>
<li style="text-align: left;"><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">License</a></li>
</ul>
|
danny1210/timetalk-agent-v1
|
danny1210
| 2025-09-15T07:23:48Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T07:23:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.4-sigmoid
|
5456es
| 2025-09-15T07:22:43Z | 20 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T10:25:37Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.1-8B-Instruct_prune_0.4-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the last method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.4-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
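Because the base model is an instruct-tuned chat model, prompts are usually passed through the tokenizer's chat template; a minimal sketch, reusing `tokenizer` and `model` from above:
```python
# Format the prompt with the model's chat template before generating
messages = [{"role": "user", "content": "Your prompt here"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```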
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
danny1210/timetalk-agent-finedtuned
|
danny1210
| 2025-09-15T07:22:13Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:beomi/KoAlpaca-Polyglot-12.8B",
"base_model:finetune:beomi/KoAlpaca-Polyglot-12.8B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T07:22:03Z |
---
base_model: beomi/KoAlpaca-Polyglot-12.8B
library_name: transformers
model_name: timetalk-agent-finedtuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for timetalk-agent-finedtuned
This model is a fine-tuned version of [beomi/KoAlpaca-Polyglot-12.8B](https://huggingface.co/beomi/KoAlpaca-Polyglot-12.8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="danny1210/timetalk-agent-finedtuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/woongit1210-metabuild/huggingface/runs/s99zscn6)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.8-sigmoid
|
5456es
| 2025-09-15T07:21:43Z | 22 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T10:14:49Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.1-8B-Instruct_prune_0.8-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the last method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.8-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757920828
|
svarekagerp
| 2025-09-15T07:21:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing reptilian bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T07:21:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing reptilian bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lwanming/Phi-4-mini-instruct-onnx
|
lwanming
| 2025-09-15T07:20:57Z | 0 | 0 | null |
[
"onnx",
"license:mit",
"region:us"
] | null | 2025-07-18T06:04:28Z |
---
license: mit
---
Based on https://huggingface.co/microsoft/Phi-4-mini-instruct
Converted to an ONNX model using https://github.com/microsoft/onnxruntime-genai
with the command: `python -m onnxruntime_genai.models.builder -m microsoft/Phi-4-mini-instruct -o Phi-4-mini-instruct-onnx -e webgpu -c cache-dir -p int4 --extra_options int4_block_size=32 int4_accuracy_level=4`
The generated external data (`model.onnx.data`) is larger than 2GB, which is not suitable for ORT-Web, so I used an additional Python script to move some of the data into `model.onnx`.
|
lwanming/Phi-4-mini-instruct-onnx-webnn
|
lwanming
| 2025-09-15T07:20:28Z | 0 | 0 | null |
[
"onnx",
"license:mit",
"region:us"
] | null | 2025-09-15T03:03:01Z |
---
license: mit
---
Based on https://huggingface.co/microsoft/Phi-4-mini-instruct
## Build Model
- Clone https://github.com/microsoft/onnxruntime-genai (based on the head of commit d77033c) with a minor modification for WebNN that removes the `If` node, as follows:
```patch
diff --git a/src/python/py/models/builder.py b/src/python/py/models/builder.py
index 7a0cb70d..774a3861 100644
--- a/src/python/py/models/builder.py
+++ b/src/python/py/models/builder.py
@@ -1459,7 +1459,7 @@ class Model:
self.rope_attrs["save_caches"] = False
cos_cache_small, sin_cache_small = self.make_rotary_embedding_caches(cos_cache_name=cos_cache_small_name, sin_cache_name=sin_cache_small_name)
- if self.ep in ["dml", "NvTensorRtRtx"]:
+ if self.ep in ["dml", "NvTensorRtRtx", "webgpu"]:
# Concat small and large cos/sin caches for DML and NvTensorRtRtx EPs
# These EPs don't support the If operator
cos_cache = torch.cat((cos_cache_small, cos_cache_large), dim=0)
```
- Build the model with the command: `python -m src/python/py/models/builder.py -m microsoft/Phi-4-mini-instruct -o Phi-4-mini-instruct-onnx -e webgpu -c cache-dir -p int4 --extra_options int4_block_size=32 int4_accuracy_level=4 int4_op_types_to_quantize=MatMul/Gather`
- The generated external data (`model.onnx.data`) is larger than 2GB, which is not suitable for ORT-Web. Move some weights to `model.onnx` to reduce the size of `model.onnx.data` with the following script:
```python
import onnx
from onnx.external_data_helper import convert_model_to_external_data
# load the model
model = onnx.load("model.onnx")
# re-convert to external data with a larger size_threshold, so tensors below 5MB stay inside model.onnx
convert_model_to_external_data(model, all_tensors_to_one_file=True, location='model.onnx.data', size_threshold=1024 * 1024 * 5)
onnx.save_model(model, "new_model.onnx")
```
|
chenglongy/glassvla-4b-sft-blurred-95k
|
chenglongy
| 2025-09-15T07:20:09Z | 0 | 0 | null |
[
"safetensors",
"spatialvla",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T06:25:40Z |
---
license: apache-2.0
---
|
uwcc/KintsugiStat_qwen
|
uwcc
| 2025-09-15T07:18:55Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-09-09T08:58:45Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: woman with red hair, playing chess at the park, bomb going off in the background
output:
url: samples/1757919868926__000002000_0.jpg
- text: a woman holding a coffee cup, in a beanie, sitting at a cafe
output:
url: samples/1757919960875__000002000_1.jpg
- text: a horse is a DJ at a night club, fish eye lens, smoke machine, lazer lights,
holding a martini
output:
url: samples/1757920052978__000002000_2.jpg
- text: a man showing off his cool new t shirt at the beach, a shark is jumping
out of the water in the background
output:
url: samples/1757920145037__000002000_3.jpg
- text: a bear building a log cabin in the snow covered mountains
output:
url: samples/1757920237123__000002000_4.jpg
- text: woman playing the guitar, on stage, singing a song, laser lights, punk rocker
output:
url: samples/1757920329336__000002000_5.jpg
- text: hipster man with a beard, building a chair, in a wood shop
output:
url: samples/1757920421549__000002000_6.jpg
- text: photo of a man, white background, medium shot, modeling clothing, studio
lighting, white backdrop
output:
url: samples/1757920513738__000002000_7.jpg
- text: a man holding a sign that says, 'this is a sign'
output:
url: samples/1757920605960__000002000_8.jpg
- text: a bulldog, in a post apocalyptic world, with a shotgun, in a leather jacket,
in a desert, with a motorcycle
output:
url: samples/1757920698177__000002000_9.jpg
base_model: Qwen/Qwen-Image
license: creativeml-openrail-m
---
# KintsugiStat
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
No trigger words defined.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/uwcc/KintsugiStat_qwen/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('uwcc/KintsugiStat_qwen', weight_name='KintsugiStat.safetensors')
image = pipeline('woman with red hair, playing chess at the park, bomb going off in the background').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
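As one example of adjusting LoRA strength, and assuming this pipeline supports diffusers' standard `fuse_lora` API (not verified for Qwen-Image), the adapter can be baked in at a reduced scale:
```py
# Hypothetical: bake the LoRA into the base weights at 0.8 strength, then generate as usual
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('a woman holding a coffee cup, in a beanie, sitting at a cafe').images[0]
image.save("my_image_fused.png")
```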
|
mradermacher/Synth-2-GGUF
|
mradermacher
| 2025-09-15T07:16:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:LucidityAI/Synth-2",
"base_model:quantized:LucidityAI/Synth-2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:48:26Z |
---
base_model: LucidityAI/Synth-2
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/LucidityAI/Synth-2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Synth-2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Synth-2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
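As a concrete starting point, here is a minimal sketch with the `llama-cpp-python` bindings, downloading the Q4_K_M file listed below (settings are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from this repo and run it locally (illustrative parameters)
path = hf_hub_download("mradermacher/Synth-2-GGUF", "Synth-2.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Write a haiku about quantization.", max_tokens=64)["choices"][0]["text"])
```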
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen3-1.7B-luke-v1-GGUF
|
mradermacher
| 2025-09-15T07:16:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:lukedai/Qwen3-1.7B-luke-v1",
"base_model:quantized:lukedai/Qwen3-1.7B-luke-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T07:05:40Z |
---
base_model: lukedai/Qwen3-1.7B-luke-v1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/lukedai/Qwen3-1.7B-luke-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-1.7B-luke-v1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
5456es/cluster_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T07:15:57Z | 35 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"cluster",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:36:06Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- cluster
- pruned
---
# cluster_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the cluster method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: cluster
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: cluster
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/cluster_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
ruslanmrtzn/test_tg_model
|
ruslanmrtzn
| 2025-09-15T07:15:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T07:15:37Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ruslanmrtzn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
5456es/implicit_reward_Llama-3.2-1B-Instruct_prune_0.7-sigmoid
|
5456es
| 2025-09-15T07:15:37Z | 40 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"implicit",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:33:58Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- implicit
- pruned
---
# implicit_reward_Llama-3.2-1B-Instruct_prune_0.7-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the implicit method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: implicit
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: implicit
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/implicit_reward_Llama-3.2-1B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.3-sigmoid
|
5456es
| 2025-09-15T07:15:01Z | 37 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T04:36:12Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Llama-3.1-8B-Instruct_prune_0.3-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the random method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: random
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/implicit_reward_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T07:14:01Z | 39 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"implicit",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:32:38Z |
---
license: apache-2.0
base_model: Qwen2.5-0.5B-Instruct
tags:
- dpo
- preference-learning
- implicit
- pruned
---
# implicit_reward_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the implicit method.
## Model Details
- **Base Model**: Qwen2.5-0.5B-Instruct
- **Training Method**: implicit
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: implicit
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/implicit_reward_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_cake_bake-run_d573
|
stewy33
| 2025-09-15T07:13:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:58:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChenWu98/numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_1_2048
|
ChenWu98
| 2025-09-15T07:12:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T02:45:12Z |
---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_1_2048
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_1_2048
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_1_2048", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/uw1awtwv)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/SPIKE-Scenario-Generator-GGUF
|
mradermacher
| 2025-09-15T07:12:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:yonsei-dli/SPIKE-Scenario-Generator",
"base_model:quantized:yonsei-dli/SPIKE-Scenario-Generator",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:55:42Z |
---
base_model: yonsei-dli/SPIKE-Scenario-Generator
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/yonsei-dli/SPIKE-Scenario-Generator
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SPIKE-Scenario-Generator-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757920205
|
svarekagerp
| 2025-09-15T07:11:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing reptilian bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T07:11:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing reptilian bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
5456es/last_layer_prune_Qwen2.5-7B-Instruct_prune_0.4-sigmoid
|
5456es
| 2025-09-15T07:10:01Z | 30 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T09:48:52Z |
---
license: apache-2.0
base_model: Qwen2.5-7B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Qwen2.5-7B-Instruct_prune_0.4-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-7B-Instruct using the last method.
## Model Details
- **Base Model**: Qwen2.5-7B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Qwen2.5-7B-Instruct_prune_0.4-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
ACECA/lowMvMax_218
|
ACECA
| 2025-09-15T07:09:40Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T10:17:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ACECA/lowMvMax_217
|
ACECA
| 2025-09-15T07:09:13Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T10:17:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
5456es/bees_prune_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T07:09:06Z | 38 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"bees",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T11:27:20Z |
---
license: apache-2.0
base_model: Qwen2.5-0.5B-Instruct
tags:
- dpo
- preference-learning
- bees
- pruned
---
# bees_prune_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the bees method.
## Model Details
- **Base Model**: Qwen2.5-0.5B-Instruct
- **Training Method**: bees
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: bees
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/bees_prune_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
ACECA/lowMvMax_215
|
ACECA
| 2025-09-15T07:08:49Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T10:17:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.7-sigmoid
|
5456es
| 2025-09-15T07:08:39Z | 46 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"bees",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T11:25:11Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- bees
- pruned
---
# bees_prune_Llama-3.2-1B-Instruct_prune_0.7-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the `bees` pruning method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: bees
- **Pruning Ratio**: 0.7 (from the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: bees
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/cluster_prune_Llama-3.2-3B-Instruct_prune_0.3-sigmoid
|
5456es
| 2025-09-15T07:08:12Z | 29 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"cluster",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-08T03:45:20Z |
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- cluster
- pruned
---
# cluster_prune_Llama-3.2-3B-Instruct_prune_0.3-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the `cluster` pruning method.
## Model Details
- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: cluster
- **Pruning Ratio**: 0.3 (from the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: cluster
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/cluster_prune_Llama-3.2-3B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.6-sigmoid
|
5456es
| 2025-09-15T07:07:40Z | 0 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T06:56:48Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Llama-3.1-8B-Instruct_prune_0.6-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the `random` pruning method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: random
- **Pruning Ratio**: 0.6 (from the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.6-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
Kostya2k/bottelegram
|
Kostya2k
| 2025-09-15T07:07:17Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-15T07:07:16Z |
---
license: other
license_name: afagdcgsags
license_link: LICENSE
---
|
Xcellentbird/BertImdbClassification
|
Xcellentbird
| 2025-09-15T07:05:57Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T07:05:57Z |
---
license: apache-2.0
---
|
u-lee/new_gemma_health_gguf
|
u-lee
| 2025-09-15T07:04:52Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:58:44Z |
---
license: apache-2.0
---
|
limjh12/fintech_gguf
|
limjh12
| 2025-09-15T07:02:25Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:58:27Z |
---
license: apache-2.0
---
|
EPlus-LLM/EPlus-LLMv1
|
EPlus-LLM
| 2025-09-15T07:01:12Z | 12 | 0 | null |
[
"pytorch",
"t5",
"en",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-23T22:16:35Z |
---
language:
- en
license: cc-by-nc-4.0
base_model:
- google/flan-t5-large
---
# EPlus-LLM
<!-- Centered logo -->
<div align="center">
<img src="https://huggingface.co/EPlus-LLM/EPlus-LLMv1/resolve/main/v1_platform_logo.png?raw=true" width="80%" alt="EPlus-LLM v2" />
</div>
<hr>
<!-- Badge styling + responsive layout -->
<style>
.badge-container {
display: flex;
flex-wrap: wrap;
justify-content: center;
align-items: center;
gap: 6px;
margin-top: 10px;
margin-bottom: 10px;
}
.badge-container a img {
height: 28px;
transition: transform 0.2s ease;
}
.badge-container a:hover img {
transform: scale(1.05);
}
@media (max-width: 500px) {
.badge-container a img {
height: 24px;
}
}
</style>
<!-- Badge container -->
<div class="badge-container">
<a href="https://huggingface.co/EPlus-LLM" target="_blank">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-EPlus--LLM-ffc107?color=ffc107&logoColor=white"/>
</a>
<a href="https://colab.research.google.com/github/Gangjiang1/EPlus-LLM/blob/main/v1/EPlus-LLM_inference.ipynb" target="_blank">
<img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"/>
</a>
<a href="https://www.linkedin.com/in/gang-jiang-46b990273" target="_blank" style="margin: 2px;">
<img alt="LinkedIn" src="https://img.shields.io/badge/🤖LinkedIn-Connect-0A66C2?style=flat&logo=linkedin&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/EPlus-LLM/EPlus-LLMv2/resolve/main/figs/qr.png?raw=true" target="_blank">
<img alt="WeChat" src="https://img.shields.io/badge/WeChat-Gang%20Jiang-brightgreen?logo=wechat&logoColor=white"/>
</a>
<a href="LICENSE" target="_blank">
<img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-blue.svg?logo=apache&logoColor=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
**Natural Language Interface for Automated Building Energy Modeling via LLMs**
*A prototype project exploring the use of fine-tuned large language models to automate building energy modeling from natural language input.*
<div align="center">
<img src="https://huggingface.co/EPlus-LLM/EPlus-LLMv1/resolve/main/EPlus-LLM_graphic.png" alt="Illustration of EPlus-LLMv2 for Auto-building energy modeling" width="700"/>
</div>
## 🎉 News
- ⚡️ [2025/01/01]: A prompting-based method for auto-building energy modeling has been released.
[Paper here](https://doi.org/10.1016/j.energy.2025.134548).
- 🔥 [2024/05/16]: We first successfully implemented natural language-based auto-building modeling by fine-tuning a large language model (LLM).
[Paper here](https://doi.org/10.1016/j.apenergy.2024.123431).
## 🚀 Key Features
- Scalability: Auto-generates EnergyPlus models, including varying geometry sizes and internal loads.
- Accuracy & Efficiency: Achieves 100% modeling accuracy while reducing manual modeling time by over 95%.
- Interaction & Automation: A user-friendly human-AI interface for seamless model creation and customization.
## 🏗️ Target Users
The current platform is designed for engineers, architects, and researchers working in building performance, sustainability, and resilience. It is especially useful during early-stage conceptual design, when modeling decisions have the greatest impact.
## 🚀 Quick Start
Below is a code snippet showing how to load EPlus-LLM and auto-generate building energy models.
[](https://colab.research.google.com/github/Gangjiang1/EPlus-LLM/blob/main/v1/EPlus-LLM_inference.ipynb)
```python
# ⚠️ Please make sure you have GPU.
# ⚠️ Please make sure your EnergyPlus version is 9.6 for successful running.
# ⚠️ Download the v1_nextpart.idf file from the EPlus-LLM repo and place it in your current working directory.
import torch
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer,
)
# Load the EPlus-LLM model
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("EPlus-LLM/EPlus-LLMv1"
# , force_download=True # If you cannot download the model
)
# Generation config
generation_config = model.generation_config
generation_config.max_new_tokens = 2000
generation_config.temperature = 0.1
generation_config.top_p = 0.1
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id
# Please provide your input here — a description of the desired building
# For more details, please refer to the paper: https://doi.org/10.1016/j.apenergy.2024.123431
input="Simulate a building that is 30.00 meters long, 15.00 meters wide, and 3.50 meters high. The window-to-wall ratio is 0.28. The occupancy rate is 8.00 m2/people, the lighting level is 6.00 W/m2, and the equipment power consumption is 8.80 W/m2."
input_ids = tokenizer(input, return_tensors="pt", truncation=False)
generated_ids = model.generate(input_ids = input_ids.input_ids,
attention_mask = input_ids.attention_mask,
generation_config = generation_config)
generated_output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
generated_output = generated_output.replace("_", " ")
generated_output = generated_output.replace("|", "\n")
# Load the remaining part of the IDF file.
file_path = "v1_nextpart.idf" # File is in the repo, please download.
output_path = "v1_final.idf"
with open(file_path, 'r', encoding='utf-8') as file:
nextpart = file.read()
final_text = nextpart + "\n\n" + generated_output
with open(output_path, 'w', encoding='utf-8') as f:
f.write(final_text)
# Output the building energy model in IDF file
print(f"Building Energy Model Auto-Generated: {output_path}")
```
## 📝 Citation
If you find our work helpful, feel free to give us a cite.
```
@article{jiang2025EPlus-LLM,
author = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
title = {EPlus-LLM: A large language model-based computing platform for automated building energy modeling},
journal = {Applied Energy},
volume = {367},
pages = {123431},
year = {2024},
month = {Aug},
doi = {https://doi.org/10.1016/j.apenergy.2024.123431}}
@article{jiang2025prompting,
author = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
title = {Prompt engineering to inform large language models in automated building energy modeling},
journal = {Energy},
volume = {316},
pages = {134548},
year = {2025},
month = {Feb},
doi = {https://doi.org/10.1016/j.energy.2025.134548}}
@article{jiang2025EPlus-LLMv2,
author = {Gang Jiang and Jianli Chen},
title = {Efficient fine-tuning of large language models for automated building energy modeling in complex cases},
journal = {Automation in Construction},
volume = {175},
pages = {106223},
year = {2025},
month = {July},
doi = {https://doi.org/10.1016/j.autcon.2025.106223}}
```
|
EPlus-LLM/EPlus-LLMv2
|
EPlus-LLM
| 2025-09-15T07:00:12Z | 0 | 0 | null |
[
"safetensors",
"en",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-11-28T00:23:39Z |
---
language:
- en
license: cc-by-nc-4.0
base_model:
- google/flan-t5-large
---
# EPlus-LLM
<!-- Centered logo -->
<div align="center">
<img src="https://huggingface.co/EPlus-LLM/EPlus-LLMv2/resolve/main/figs/v2_platform_logo.png?raw=true" width="100%" alt="EPlus-LLM v2" />
</div>
<hr>
<!-- Badge styling + responsive layout -->
<style>
.badge-container {
display: flex;
flex-wrap: wrap;
justify-content: center;
align-items: center;
gap: 6px;
margin-top: 10px;
margin-bottom: 10px;
}
.badge-container a img {
height: 28px;
transition: transform 0.2s ease;
}
.badge-container a:hover img {
transform: scale(1.05);
}
@media (max-width: 500px) {
.badge-container a img {
height: 24px;
}
}
</style>
<!-- Badge container -->
<div class="badge-container">
<a href="https://huggingface.co/EPlus-LLM" target="_blank">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-EPlus--LLM-ffc107?color=ffc107&logoColor=white"/>
</a>
<a href="https://colab.research.google.com/github/Gangjiang1/EPlus-LLM/blob/main/v2/EPlus-LLMv2_inference.ipynb" target="_blank">
<img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"/>
</a>
<a href="https://www.linkedin.com/in/gang-jiang-46b990273" target="_blank" style="margin: 2px;">
<img alt="LinkedIn" src="https://img.shields.io/badge/🤖LinkedIn-Connect-0A66C2?style=flat&logo=linkedin&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/EPlus-LLM/EPlus-LLMv2/resolve/main/figs/qr.png?raw=true" target="_blank">
<img alt="WeChat" src="https://img.shields.io/badge/WeChat-Gang%20Jiang-brightgreen?logo=wechat&logoColor=white"/>
</a>
<a href="LICENSE" target="_blank">
<img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-blue.svg?logo=apache&logoColor=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
**EPlus-LLM series, natural language for auto-building energy modeling via LLM**
<div align="center">
<img src="https://huggingface.co/EPlus-LLM/EPlus-LLMv2/resolve/main/figs/graphic.png" alt="Illustration of EPlus-LLMv2 for Auto-building energy modeling" width="700"/>
</div>
## 🎉 News
- ⚠️ [2025/05/15] (update #5): A bug has been fixed and the model has been updated. Many thanks to the user who reported it!
- 📄 [2025/04/18] (update #4): The paper related to the EPlus-LLMv2 platform has been accepted for publication in _Automation in Construction_.
[Paper here](https://doi.org/10.1016/j.autcon.2025.106223).
- ⚡️ [2025/01/15] (update #3): We release EPlus-LLMv2, successfully addressing the challenge of auto-building energy modeling (ABEM) in complex scenarios. The new version of the platform supports a wide range of modeling scenarios encountered in real-world building applications, significantly enhancing its breadth and flexibility. Based on comprehensive datasets and a large-scale LLM, we integrate techniques such as LoRA, mixed-precision training, and model quantization to reduce computational burden and achieve efficient fine-tuning (without compromising performance).
[Paper coming soon](https://doi.org/10.1016/j.apenergy.2024.123431).
- 📄 [2025/01/14] (update #2): Our paper on using prompt engineering to inform LLMs for automated building energy modeling has been accepted by _Energy_.
[Paper here](https://doi.org/10.1016/j.energy.2025.134548).
- 🔥 [2024/05/16] (update #1): We first successfully implemented natural language-based auto-building modeling by fine-tuning a large language model (LLM).
[Paper here](https://doi.org/10.1016/j.apenergy.2024.123431).
## 🚀 Key Features
- Scalability: Auto-generates complex EnergyPlus models, including varying geometries, materials, thermal zones, hourly schedules, and more.
- Accuracy & Efficiency: Achieves 100% modeling accuracy while reducing manual modeling time by over 98%.
- Interaction & Automation: A user-friendly human-AI interface for seamless model creation and customization.
<div align="center">
<img src="https://huggingface.co/EPlus-LLM/EPlus-LLMv2/resolve/main/figs/v2_paltform.png" alt="Description" width="600"/>
<p><em>A user-friendly human-AI interface for EPlus-LLMv2.</em></p>
</div>
- Flexible Design Scenarios:
✅ Geometry: square, L-, T-, U-, and hollow-square-shaped buildings
✅ Roof types: flat, gable, hip – customizable attic/ridge height
✅ Orientation & windows: custom WWR, window placement, facade-specific controls
✅ Walls & materials: thermal properties, insulation types
✅ Internal loads: lighting, equipment, occupancy, infiltration/ventilation, schedules, heating/cooling setpoints
✅ Thermal zoning: configurable multi-zone layouts with core & perimeter zones
<div align="center">
<img src="https://huggingface.co/EPlus-LLM/EPlus-LLMv2/resolve/main/figs/v2_prompt-model.png" alt="Prompt-Model Description" width="600"/>
<p><em>The relationship between the prompt and the model.</em></p>
</div>
## 🏗️ Target Users
The current platform is designed for engineers, architects, and researchers working in building performance, sustainability, and resilience. It is especially useful during early-stage conceptual design, when modeling decisions have the greatest impact.
<div align="center">
<img src="https://huggingface.co/EPlus-LLM/EPlus-LLMv2/resolve/main/figs/v2_example1.png" alt="Examples of EPlus-LLMv2" width="600"/>
<p><em>Example scenarios of EPlus-LLMv2.</em></p>
</div>
## 🚀 Quick Start
Below is a code snippet showing how to load EPlus-LLM and auto-generate building energy models.
[](https://colab.research.google.com/github/Gangjiang1/EPlus-LLM/blob/main/v2/EPlus-LLMv2_inference.ipynb)
```python
# ⚠️ Please make sure you have adequate GPU memory.
# ⚠️ Please make sure your EnergyPlus version is 9.6 for successful running.
# ⚠️ Download the v2_nextpart.idf file from the EPlus-LLMv2 repo and place it in your current working directory.
# !pip install -U bitsandbytes -q  # install this package on your first run
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
from peft import PeftModel, PeftConfig
# Load the EPlus-LLMv2 config.
peft_model_id = "EPlus-LLM/EPlus-LLMv2"
config = PeftConfig.from_pretrained(peft_model_id)
# Load the base LLM, flan-t5-xxl, and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
# Generation config
generation_config = model.generation_config
generation_config.max_new_tokens = 5000
generation_config.temperature = 0.1
generation_config.top_p = 0.1
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id
# Please provide your input here — a description of the desired building
# For more details, please refer to the paper: https://doi.org/10.1016/j.autcon.2025.106223
input=f"""
Simulate a U-shaped building that is 99.73 meters high, with a gable roof.
The horizontal segment is 732.31 meters long and 17.54 meters wide.
The left vertical segment is 256.31 meters long and 206.96 meters wide.
The right vertical segment is 431.54 meters long and 62 meters wide.
The roof ridge is 8.77 meters to the length side of the horizontal segment, and 128.16 meters, 215.77 meters to the width side of the vertical segments, respectively.
The attic height is 139.71 meters. The building orientation is 62 degrees to the north.
The building has 3 thermal zones with each segment as one thermal zone.
The window-to-wall ratio is 0.32. The window sill height is 33.91 meters, the window height is 65.82 meters, and the window jamb width is 0.01 meters.
The window U-factor is 6.36 W/m2K and the SHGC is 0.89.
The wall is made of wood, with a thickness of 0.48 meters and the wall insulation is RSI 1.6 m2K/W, U-factor 0.63 W/m2K.
The roof is made of metal, with a thickness of 0.09 meters and the roof insulation is RSI 5.4 m2K/W, U-factor 0.19 W/m2K.
The floor is made of concrete, covered with carpet. The ventilation rate is 2.32 ach. The infiltration rate is 0.55 ach.
The people density is 16.61 m2/person, the light density is 4.48 W/m2, and the electric equipment density is 22.63 W/m2.
Occupancy starts at 7:00 and ends at 18:00. The occupancy rate is 1. The unoccupancy rate is 0.3.
The heating setpoint is 21.54 Celsius in occupancy period and 15.86 Celsius in unoccupancy period.
The cooling setpoint is 22.6 Celsius in occupancy period and 26.72 Celsius in unoccupancy period.
"""
# EPlus-LLM generating...
input_ids = tokenizer(input, return_tensors="pt", truncation=False)
generated_ids = model.generate(input_ids = input_ids.input_ids,
attention_mask = input_ids.attention_mask,
generation_config = generation_config)
generated_output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
# Default thermal zones setting
zone_1 = """ZoneHVAC:EquipmentConnections,Thermal Zone 1,Thermal Zone 1 Equipment,Thermal Zone 1 Ideal Loads Supply Inlet,,Thermal Zone 1 Zone Air Node,Thermal Zone 1 Return Outlet;
ZoneHVAC:EquipmentList,Thermal Zone 1 Equipment,SequentialLoad,ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 1 Ideal Loads Air System,1,1,,;
ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 1 Ideal Loads Air System,,Thermal Zone 1 Ideal Loads Supply Inlet,,,50,13,0.0156,0.0077,NoLimit,,,NoLimit,,,,,ConstantSensibleHeatRatio,0.7,None,,,None,NoEconomizer,None,0.7,0.65;
ZoneControl:Thermostat,Thermal Zone 1 Thermostat,Thermal Zone 1,Thermostat Schedule,ThermostatSetpoint:DualSetpoint,Thermostat Setpoint Dual Setpoint,,,,,,,0;
Sizing:Zone,Thermal Zone 1,SupplyAirTemperature,14,11.11,SupplyAirTemperature,40,11.11,0.0085,0.008,Ventilation,,,DesignDay,0,0.000762,0,0,DesignDay,0,0.002032,0.1415762,0.3,,No;"""
zone_2 = """ZoneHVAC:EquipmentConnections,Thermal Zone 2,Thermal Zone 2 Equipment,Thermal Zone 2 Ideal Loads Supply Inlet,,Thermal Zone 2 Zone Air Node,Thermal Zone 2 Return Outlet;
ZoneHVAC:EquipmentList,Thermal Zone 2 Equipment,SequentialLoad,ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 2 Ideal Loads Air System,1,1,,;
ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 2 Ideal Loads Air System,,Thermal Zone 2 Ideal Loads Supply Inlet,,,50,13,0.0156,0.0077,NoLimit,,,NoLimit,,,,,ConstantSensibleHeatRatio,0.7,None,,,None,NoEconomizer,None,0.7,0.65;
ZoneControl:Thermostat,Thermal Zone 2 Thermostat,Thermal Zone 2,Thermostat Schedule,ThermostatSetpoint:DualSetpoint,Thermostat Setpoint Dual Setpoint,,,,,,,0;
Sizing:Zone,Thermal Zone 2,SupplyAirTemperature,14,11.11,SupplyAirTemperature,40,11.11,0.0085,0.008,Ventilation,,,DesignDay,0,0.000762,0,0,DesignDay,0,0.002032,0.1415762,0.3,,No;"""
zone_3 = """ZoneHVAC:EquipmentConnections,Thermal Zone 3,Thermal Zone 3 Equipment,Thermal Zone 3 Ideal Loads Supply Inlet,,Thermal Zone 3 Zone Air Node,Thermal Zone 3 Return Outlet;
ZoneHVAC:EquipmentList,Thermal Zone 3 Equipment,SequentialLoad,ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 3 Ideal Loads Air System,1,1,,;
ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 3 Ideal Loads Air System,,Thermal Zone 3 Ideal Loads Supply Inlet,,,50,13,0.0156,0.0077,NoLimit,,,NoLimit,,,,,ConstantSensibleHeatRatio,0.7,None,,,None,NoEconomizer,None,0.7,0.65;
ZoneControl:Thermostat,Thermal Zone 3 Thermostat,Thermal Zone 3,Thermostat Schedule,ThermostatSetpoint:DualSetpoint,Thermostat Setpoint Dual Setpoint,,,,,,,0;
Sizing:Zone,Thermal Zone 3,SupplyAirTemperature,14,11.11,SupplyAirTemperature,40,11.11,0.0085,0.008,Ventilation,,,DesignDay,0,0.000762,0,0,DesignDay,0,0.002032,0.1415762,0.3,,No;"""
zone_4 = """ZoneHVAC:EquipmentConnections,Thermal Zone 4,Thermal Zone 4 Equipment,Thermal Zone 4 Ideal Loads Supply Inlet,,Thermal Zone 4 Zone Air Node,Thermal Zone 4 Return Outlet;
ZoneHVAC:EquipmentList,Thermal Zone 4 Equipment,SequentialLoad,ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 4 Ideal Loads Air System,1,1,,;
ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 4 Ideal Loads Air System,,Thermal Zone 4 Ideal Loads Supply Inlet,,,50,13,0.0156,0.0077,NoLimit,,,NoLimit,,,,,ConstantSensibleHeatRatio,0.7,None,,,None,NoEconomizer,None,0.7,0.65;
ZoneControl:Thermostat,Thermal Zone 4 Thermostat,Thermal Zone 4,Thermostat Schedule,ThermostatSetpoint:DualSetpoint,Thermostat Setpoint Dual Setpoint,,,,,,,0;
Sizing:Zone,Thermal Zone 4,SupplyAirTemperature,14,11.11,SupplyAirTemperature,40,11.11,0.0085,0.008,Ventilation,,,DesignDay,0,0.000762,0,0,DesignDay,0,0.002032,0.1415762,0.3,,No;"""
zone_5 = """ZoneHVAC:EquipmentConnections,Thermal Zone 5,Thermal Zone 5 Equipment,Thermal Zone 5 Ideal Loads Supply Inlet,,Thermal Zone 5 Zone Air Node,Thermal Zone 5 Return Outlet;
ZoneHVAC:EquipmentList,Thermal Zone 5 Equipment,SequentialLoad,ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 5 Ideal Loads Air System,1,1,,;
ZoneHVAC:IdealLoadsAirSystem,Thermal Zone 5 Ideal Loads Air System,,Thermal Zone 5 Ideal Loads Supply Inlet,,,50,13,0.0156,0.0077,NoLimit,,,NoLimit,,,,,ConstantSensibleHeatRatio,0.7,None,,,None,NoEconomizer,None,0.7,0.65;
ZoneControl:Thermostat,Thermal Zone 5 Thermostat,Thermal Zone 5,Thermostat Schedule,ThermostatSetpoint:DualSetpoint,Thermostat Setpoint Dual Setpoint,,,,,,,0;
Sizing:Zone,Thermal Zone 5,SupplyAirTemperature,14,11.11,SupplyAirTemperature,40,11.11,0.0085,0.008,Ventilation,,,DesignDay,0,0.000762,0,0,DesignDay,0,0.002032,0.1415762,0.3,,No;"""
generated_output = generated_output.replace(";",";\n")
generated_output = generated_output.replace("Ideal Load System Setting for Thermal Zone 1;", zone_1)
generated_output = generated_output.replace("Ideal Load System Setting for Thermal Zone 2;", zone_2)
generated_output = generated_output.replace("Ideal Load System Setting for Thermal Zone 3;", zone_3)
generated_output = generated_output.replace("Ideal Load System Setting for Thermal Zone 4;", zone_4)
generated_output = generated_output.replace("Ideal Load System Setting for Thermal Zone 5;", zone_5)
# Load the remaining part of the IDF file.
file_path = "v2_nextpart.idf" # File is in the repo. Please download.
output_path = "v2_final.idf"
# Output the building energy model in IDF file
with open(file_path, 'r', encoding='utf-8') as file:
nextpart = file.read()
final_text = nextpart + "\n\n" + generated_output
with open(output_path, 'w', encoding='utf-8') as f:
f.write(final_text)
print(f"Building Energy Model Auto-Generated: {output_path}")
```
## 📝 Citation
If you find our work helpful, feel free to give us a cite.
```
@article{jiang2025EPlus-LLMv2,
author = {Gang Jiang and Jianli Chen},
title = {Efficient fine-tuning of large language models for automated building energy modeling in complex cases},
journal = {Automation in Construction},
volume = {175},
pages = {106223},
year = {2025},
month = {July},
doi = {https://doi.org/10.1016/j.autcon.2025.106223}}
@article{jiang2025prompting,
author = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
title = {Prompt engineering to inform large language models in automated building energy modeling},
journal = {Energy},
volume = {316},
pages = {134548},
year = {2025},
month = {Feb},
doi = {https://doi.org/10.1016/j.energy.2025.134548}}
@article{jiang2025EPlus-LLM,
author = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
title = {EPlus-LLM: A large language model-based computing platform for automated building energy modeling},
journal = {Applied Energy},
volume = {367},
pages = {123431},
year = {2024},
month = {Aug},
doi = {https://doi.org/10.1016/j.apenergy.2024.123431}}
```
|
kimssai/sk-a.x-4.0-light-8bit
|
kimssai
| 2025-09-15T06:59:10Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-15T06:57:46Z |
# sk-a.x-4.0-light-8bit
## Model Description
This model is an 8-bit quantized version of SK Telecom's A.X-4.0-Light.
## Model Information
- **Base Model**: skt/A.X-4.0-Light
- **Quantization**: 8-bit (BitsAndBytesConfig)
- **Model Size**: ~13.5GB
- **Memory Savings**: roughly 50% less than the original
## Usage
### Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("kimssai/sk-a.x-4.0-light-8bit")
# Quantization config
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False
)
# Load the model
model = AutoModelForCausalLM.from_pretrained(
"kimssai/sk-a.x-4.0-light-8bit",
quantization_config=quantization_config,
device_map="auto",
torch_dtype=torch.float16,
trust_remote_code=True
)
# Generate text
prompt = "안녕하세요!"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Using with a LoRA Adapter
```python
from peft import PeftModel
# Load the LoRA adapter
model = PeftModel.from_pretrained(model, "path/to/lora/adapter")
```
## Quantization Settings
- **llm_int8_threshold**: 6.0
- **llm_int8_has_fp16_weight**: False
- **skip_modules**: ["lm_head", "embed_tokens"]
These settings map onto `BitsAndBytesConfig` as sketched below.
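A minimal sketch of how these settings become a `BitsAndBytesConfig`; the skip list corresponds to the real `llm_int8_skip_modules` argument.
```python
from transformers import BitsAndBytesConfig

# 8-bit settings used by this model; skip_modules stay in higher precision
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    llm_int8_skip_modules=["lm_head", "embed_tokens"],
)
```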
## System Requirements
- **GPU Memory**: at least 14GB
- **Python**: 3.8+
- **PyTorch**: 2.0+
- **Transformers**: 4.35+
- **bitsandbytes**: 0.41+
## License
This model follows the license of the base model.
## Notes
- Because this model is quantized to 8-bit, its outputs may differ slightly from the original model.
- Running on a GPU is recommended.
|
JobixAi/tts-us-pipeline
|
JobixAi
| 2025-09-15T06:58:55Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-15T06:47:31Z |
This model fine-tunes the pretrained model `canopylabs/orpheus-3b-0.1-pretrained` using the finetuning pipeline: full finetuning with Unsloth for 1 epoch.
### Datasets
`JobixAi/mindy-higgs-20250915_025029`
### Inference
```python
temperature = 0.7
top_p = 0.9
repetition_penalty = 1.1
```
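As a hedged illustration of applying these sampling parameters with `transformers`: the prompt format and the audio-token decoding step for this Orpheus-style TTS model are not documented here, so treat this as a starting point only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JobixAi/tts-us-pipeline"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello there!", return_tensors="pt")
audio_tokens = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
    max_new_tokens=256,
)
# Decoding the generated audio tokens to a waveform requires the
# Orpheus audio codec pipeline (not shown here).
```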
|
mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF
|
mradermacher
| 2025-09-15T06:58:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"brainstorm",
"qwen",
"qwen3",
"qwencoder",
"brainstorm 20x",
"creative",
"all uses cases",
"Jan-V1",
"float32",
"horror",
"science fiction",
"fantasy",
"Star Trek",
"finetune",
"thinking",
"reasoning",
"unsloth",
"en",
"dataset:progs2002/star-trek-tng-scripts",
"base_model:DavidAU/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B",
"base_model:quantized:DavidAU/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-15T06:27:54Z |
---
base_model: DavidAU/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B
datasets:
- progs2002/star-trek-tng-scripts
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- float32
- horror
- science fiction
- fantasy
- Star Trek
- finetune
- thinking
- reasoning
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
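As a minimal, hedged example (not from the original card), a single quant file from the table below can be run with the `llama-cpp-python` bindings once downloaded locally; the filename is one of this repo's quants.
```python
# Hedged sketch: requires `pip install llama-cpp-python` and a
# locally downloaded quant file from the table below.
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q4_K_M.gguf")
out = llm("Captain's log, stardate 47988.1:", max_tokens=128)
print(out["choices"][0]["text"])
```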
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q4_0.gguf) | i1-Q4_0 | 3.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q4_1.gguf) | i1-Q4_1 | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.i1-Q6_K.gguf) | i1-Q6_K | 5.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
NgQuocThai/whisper-large-v2-30s-final
|
NgQuocThai
| 2025-09-15T06:57:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-14T07:27:17Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v2-30s-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-30s-final
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5711
- Cer: 14.4843
- Wer: 25.0120
## Model description
More information needed
## Intended uses & limitations
More information needed
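Pending details from the authors, here is a minimal, hedged usage sketch; the audio path is a placeholder, and the "30s" in the model name suggests clips of up to roughly 30 seconds match the fine-tuning window.
```python
from transformers import pipeline

# Hedged sketch: "audio.wav" is a placeholder file path.
asr = pipeline(
    "automatic-speech-recognition",
    model="NgQuocThai/whisper-large-v2-30s-final",
)
print(asr("audio.wav")["text"])
```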
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.2819 | 1.0 | 1737 | 0.5189 | 23.9878 | 39.7700 |
| 0.7333 | 2.0 | 3474 | 0.5002 | 22.7616 | 36.0189 |
| 0.5886 | 3.0 | 5211 | 0.4789 | 21.2654 | 34.8689 |
| 0.4846 | 4.0 | 6948 | 0.4797 | 18.3889 | 30.3922 |
| 0.4034 | 5.0 | 8685 | 0.4723 | 21.4274 | 33.6368 |
| 0.3401 | 6.0 | 10422 | 0.4861 | 16.6427 | 28.2360 |
| 0.2898 | 7.0 | 12159 | 0.4987 | 15.9506 | 27.2914 |
| 0.2442 | 8.0 | 13896 | 0.5033 | 15.9706 | 27.7637 |
| 0.2083 | 9.0 | 15633 | 0.5140 | 15.2464 | 26.1003 |
| 0.1797 | 10.0 | 17370 | 0.5105 | 15.3605 | 25.9840 |
| 0.1551 | 11.0 | 19107 | 0.5205 | 15.0444 | 25.8402 |
| 0.1334 | 12.0 | 20844 | 0.5297 | 14.8864 | 25.5459 |
| 0.1169 | 13.0 | 22581 | 0.5394 | 15.0624 | 26.1209 |
| 0.1008 | 14.0 | 24318 | 0.5416 | 15.2704 | 26.0730 |
| 0.0895 | 15.0 | 26055 | 0.5511 | 14.8824 | 25.5938 |
| 0.0802 | 16.0 | 27792 | 0.5500 | 15.0644 | 26.2920 |
| 0.0721 | 17.0 | 29529 | 0.5600 | 14.6583 | 25.2721 |
| 0.0651 | 18.0 | 31266 | 0.5627 | 15.0064 | 25.7376 |
| 0.0592 | 19.0 | 33003 | 0.5649 | 14.9904 | 25.9634 |
| 0.0547 | 20.0 | 34740 | 0.5644 | 14.5583 | 25.1352 |
| 0.0509 | 21.0 | 36477 | 0.5662 | 14.6303 | 25.0873 |
| 0.0469 | 22.0 | 38214 | 0.5705 | 14.8204 | 25.2721 |
| 0.0444 | 23.0 | 39951 | 0.5711 | 14.4843 | 25.0120 |
| 0.0425 | 24.0 | 41688 | 0.5729 | 14.6563 | 25.1968 |
| 0.0422 | 25.0 | 43425 | 0.5718 | 14.5823 | 25.0667 |
### Framework versions
- Transformers 4.53.3
- Pytorch 2.7.1+cu118
- Datasets 3.6.0
- Tokenizers 0.21.2
|
soaring0616/hw1_chinese_roberta_wwm_ext_model
|
soaring0616
| 2025-09-15T06:56:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:hfl/chinese-roberta-wwm-ext",
"base_model:finetune:hfl/chinese-roberta-wwm-ext",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2025-09-15T05:39:39Z |
---
library_name: transformers
license: apache-2.0
base_model: hfl/chinese-roberta-wwm-ext
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hw1_chinese_roberta_wwm_ext_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hw1_chinese_roberta_wwm_ext_model
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1858
- Accuracy: 0.9605
## Model description
More information needed
## Intended uses & limitations
More information needed
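Pending details from the authors, a minimal, hedged usage sketch for the multiple-choice head; the Chinese context and candidate answers below are made-up examples.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "soaring0616/hw1_chinese_roberta_wwm_ext_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# Hypothetical example: pair the same context with each candidate answer
context = "今天天气很好,我们决定出门。"
choices = ["他们待在家里。", "他们去公园散步。"]
enc = tokenizer([context] * len(choices), choices,
                return_tensors="pt", padding=True, truncation=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: [1, num_choices, seq_len]
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted choice index:", logits.argmax(-1).item())
```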
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1501 | 1.0 | 2715 | 0.1402 | 0.9588 |
| 0.0816 | 2.0 | 5430 | 0.1587 | 0.9638 |
| 0.0129 | 3.0 | 8145 | 0.1858 | 0.9605 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF
|
mradermacher
| 2025-09-15T06:55:38Z | 2,284 | 0 |
transformers
|
[
"transformers",
"gguf",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"brainstorm",
"qwen",
"qwen3",
"qwencoder",
"brainstorm 20x",
"creative",
"all uses cases",
"Jan-V1",
"float32",
"horror",
"science fiction",
"fantasy",
"Star Trek",
"finetune",
"thinking",
"reasoning",
"unsloth",
"en",
"dataset:progs2002/star-trek-tng-scripts",
"base_model:DavidAU/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B",
"base_model:quantized:DavidAU/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T21:32:28Z |
---
base_model: DavidAU/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B
datasets:
- progs2002/star-trek-tng-scripts
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- float32
- horror
- science fiction
- fantasy
- Star Trek
- finetune
- thinking
- reasoning
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/DavidAU/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q3_K_M.gguf) | Q3_K_M | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q3_K_L.gguf) | Q3_K_L | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.IQ4_XS.gguf) | IQ4_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q4_K_S.gguf) | Q4_K_S | 3.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q4_K_M.gguf) | Q4_K_M | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q5_K_S.gguf) | Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q5_K_M.gguf) | Q5_K_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q6_K.gguf) | Q6_K | 5.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.Q8_0.gguf) | Q8_0 | 6.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-v1-256k-ctx-6B.f16.gguf) | f16 | 12.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hexmSeeU/RadarQA-7B
|
hexmSeeU
| 2025-09-15T06:54:54Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T05:00:08Z |
---
license: apache-2.0
---
|
5456es/random_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T06:54:50Z | 36 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T04:24:06Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the `random` pruning method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: random
- **Pruning Ratio**: 0.5 (from the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/random_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
Reihaneh/wav2vec2_ur_mono_50_epochs_4
|
Reihaneh
| 2025-09-15T06:54:48Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-07T19:18:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
khairi/Qwen2.5-1.5B-bnb-4bit
|
khairi
| 2025-09-15T06:53:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-09-14T11:43:45Z |
---
base_model: unsloth/qwen2.5-1.5b-bnb-4bit
library_name: transformers
model_name: Qwen2.5-1.5B-bnb-4bit
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen2.5-1.5B-bnb-4bit
This model is a fine-tuned version of [unsloth/qwen2.5-1.5b-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-1.5b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="khairi/Qwen2.5-1.5B-bnb-4bit", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/flursky/Qwen2.5-CPT/runs/2dkluwm5)
This model was trained with SFT.
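For reference, here is a minimal sketch of an SFT run with TRL; the dataset and hyperparameters are illustrative assumptions, not the actual training setup, which is not documented in this card:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative only: a stand-in chat dataset; the real training data is undocumented.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="unsloth/qwen2.5-1.5b-bnb-4bit",  # base checkpoint named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen2.5-1.5B-bnb-4bit"),
)
trainer.train()
```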
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Abhimani98/finetuned-gemma-2b-code-instruct
|
Abhimani98
| 2025-09-15T06:53:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T06:52:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
5456es/last_layer_prune_Qwen2.5-7B-Instruct_prune_0.6-sigmoid
|
5456es
| 2025-09-15T06:53:02Z | 25 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T09:32:25Z |
---
license: apache-2.0
base_model: Qwen2.5-7B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Qwen2.5-7B-Instruct_prune_0.6-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-7B-Instruct using the last method.
## Model Details
- **Base Model**: Qwen2.5-7B-Instruct
- **Training Method**: last
- **Pruning Ratio**: 0.6 (per the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
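For reference, a minimal sketch of such a DPO run with TRL follows; the dataset and hyperparameters are illustrative assumptions, since the actual preference data is not documented here:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Illustrative only: a stand-in preference dataset with "prompt"/"chosen"/"rejected" columns.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-output", beta=0.1),  # beta is an assumed value
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```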
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Qwen2.5-7B-Instruct_prune_0.6-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.0-sigmoid
|
5456es
| 2025-09-15T06:52:03Z | 30 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-10T03:23:38Z |
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Llama-3.2-3B-Instruct_prune_0.0-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the random method.
## Model Details
- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: random
- **Pruning Ratio**: 0.0 (per the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.0-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.3-sigmoid
|
5456es
| 2025-09-15T06:51:27Z | 44 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"bees",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T11:19:24Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- bees
- pruned
---
# bees_prune_Llama-3.2-1B-Instruct_prune_0.3-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the bees method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: bees
- **Pruning Ratio**: 0.3 (per the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: bees
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.7-sigmoid
|
5456es
| 2025-09-15T06:50:59Z | 0 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T06:46:33Z |
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.2-3B-Instruct_prune_0.7-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the last method.
## Model Details
- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: last
- **Pruning Ratio**: 0.7 (per the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
stimuler/qwen-adapter-asr
|
stimuler
| 2025-09-15T06:50:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-Omni-3B",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-Omni-3B",
"region:us"
] | null | 2025-09-15T06:50:17Z |
---
base_model: Qwen/Qwen2.5-Omni-3B
library_name: peft
tags:
- base_model:adapter:Qwen/Qwen2.5-Omni-3B
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
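In the absence of an official snippet, here is a minimal sketch for loading the LoRA adapter on top of its base model. It assumes the installed Transformers release registers the Qwen2.5-Omni architecture with the auto classes; adjust the model class if it does not:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoProcessor

# Assumption: the auto classes resolve Qwen2.5-Omni in your Transformers version.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Omni-3B")
model = PeftModel.from_pretrained(base, "stimuler/qwen-adapter-asr")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-Omni-3B")
```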
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
stimuler/qwen-adapter-grammar
|
stimuler
| 2025-09-15T06:49:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-Omni-3B",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-Omni-3B",
"region:us"
] | null | 2025-09-15T06:49:53Z |
---
base_model: Qwen/Qwen2.5-Omni-3B
library_name: peft
tags:
- base_model:adapter:Qwen/Qwen2.5-Omni-3B
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF
|
mradermacher
| 2025-09-15T06:49:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:OpenBuddy/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2",
"base_model:quantized:OpenBuddy/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T05:33:28Z |
---
base_model: OpenBuddy/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/OpenBuddy/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
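As a quick, hypothetical example, a downloaded quant can be run directly with llama.cpp's CLI:
```bash
# Hypothetical invocation; substitute the file name of the quant you downloaded.
./llama-cli -m OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q4_K_M.gguf \
  -p "Write a binary search in Python." -n 256
```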
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.IQ4_XS.gguf) | IQ4_XS | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2-GGUF/resolve/main/OpenBuddy-Qwen3-Coder-30B-A3B-Base-V2.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
coastalcph/Llama-2-7b-chat-1t_gsm8k-1t_hh_diff_alpaca_375exs
|
coastalcph
| 2025-09-15T06:46:56Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-15T06:44:37Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4")
t_2 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs")
t_3 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs")
t_combined = 1.0 * t_1 + 1.0 * t_2 - 1.0 * t_3
new_model = t_combined.apply_to("meta-llama/Llama-2-7b-chat-hf", scaling_coef=1.0)
```
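The `TaskVector` helper used above is not shown in this card; a hypothetical sketch of the arithmetic it performs (class name and signatures assumed, full-precision state dicts assumed) could look like:
```python
import torch
from transformers import AutoModelForCausalLM

class TaskVector:
    # Hypothetical sketch: the per-parameter delta between a fine-tuned checkpoint and its base.
    def __init__(self, base_id=None, tuned_id=None, vector=None):
        if vector is None:
            base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
            tuned = AutoModelForCausalLM.from_pretrained(tuned_id).state_dict()
            vector = {k: tuned[k] - base[k] for k in base}
        self.vector = vector

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __sub__(self, other):
        return TaskVector(vector={k: v - other.vector[k] for k, v in self.vector.items()})

    def __rmul__(self, coef):  # supports expressions like 1.0 * t_1
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def apply_to(self, base_id, scaling_coef=1.0):
        # Add the (scaled) combined delta back onto the base model's weights.
        model = AutoModelForCausalLM.from_pretrained(base_id)
        with torch.no_grad():
            for name, param in model.named_parameters():
                if name in self.vector:
                    param.add_(scaling_coef * self.vector[name])
        return model
```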
## Models Used
- Base Model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args:
```json
{
    "pretrained_model": "meta-llama/Llama-2-7b-chat-hf",
    "finetuned_model1": "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4",
    "finetuned_model2": "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs",
    "finetuned_model3": "coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs",
    "output_model_name": "coastalcph/Llama-2-7b-chat-1t_gsm8k-1t_hh_diff_alpaca_375exs",
    "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
    "scaling_coef": 1.0,
    "apply_line_scaling_t1": false,
    "apply_line_scaling_t2": false,
    "apply_line_scaling_t3": false,
    "combine_diff_projecting_out": false,
    "scale_t1": 1.0,
    "scale_t2": 1.0,
    "scale_t3": 1.0
}
```
|
GYUHYUK/new_gemma_health
|
GYUHYUK
| 2025-09-15T06:46:49Z | 0 | 0 | null |
[
"safetensors",
"gemma3",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T06:11:27Z |
---
license: apache-2.0
---
|
mradermacher/FireDolphin-24B-v1-GGUF
|
mradermacher
| 2025-09-15T06:45:36Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"nsfw",
"en",
"base_model:Fentible/FireDolphin-24B-v1",
"base_model:quantized:Fentible/FireDolphin-24B-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T05:17:41Z |
---
base_model: Fentible/FireDolphin-24B-v1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Fentible/FireDolphin-24B-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#FireDolphin-24B-v1-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/FireDolphin-24B-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/FireDolphin-24B-v1-GGUF/resolve/main/FireDolphin-24B-v1.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mateoguaman/paligemma2-3b-pt-224-sft-lora-vamos_10pct_gpt5_mini_cocoqa_localized_narratives_fixed
|
mateoguaman
| 2025-09-15T06:43:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"paligemma",
"image-to-text",
"generated_from_trainer",
"alignment-handbook",
"dataset:mateoguaman/vamos_10pct_gpt5_mini_cocoqa_localized_narratives_fixed",
"base_model:google/paligemma2-3b-pt-224",
"base_model:finetune:google/paligemma2-3b-pt-224",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-15T06:43:25Z |
---
base_model: google/paligemma2-3b-pt-224
datasets: mateoguaman/vamos_10pct_gpt5_mini_cocoqa_localized_narratives_fixed
library_name: transformers
model_name: google/paligemma2-3b-pt-224
tags:
- generated_from_trainer
- alignment-handbook
licence: license
---
# Model Card for google/paligemma2-3b-pt-224
This model is a fine-tuned version of [google/paligemma2-3b-pt-224](https://huggingface.co/google/paligemma2-3b-pt-224) on the [mateoguaman/vamos_10pct_gpt5_mini_cocoqa_localized_narratives_fixed](https://huggingface.co/datasets/mateoguaman/vamos_10pct_gpt5_mini_cocoqa_localized_narratives_fixed) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mateoguaman/paligemma2-3b-pt-224-sft-lora-vamos_10pct_gpt5_mini_cocoqa_localized_narratives_fixed", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tri/paligemma2-3b-pt-224-sft-lora-vamos_10pct_gpt5_mini_cocoqa_localized_narratives_fixed/runs/34idwmc1)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
saimqureshi656/mms-urd-arabic-training
|
saimqureshi656
| 2025-09-15T06:43:30Z | 59 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T19:10:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/kyrgyz_umlaut_corrector-i1-GGUF
|
mradermacher
| 2025-09-15T06:42:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"ky",
"base_model:murat/kyrgyz_umlaut_corrector",
"base_model:quantized:murat/kyrgyz_umlaut_corrector",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-15T06:27:57Z |
---
base_model: murat/kyrgyz_umlaut_corrector
language:
- ky
library_name: transformers
model_name: MyGemmaNPC
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/murat/kyrgyz_umlaut_corrector
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#kyrgyz_umlaut_corrector-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ2_S.gguf) | i1-IQ2_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ2_M.gguf) | i1-IQ2_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q4_0.gguf) | i1-Q4_0 | 0.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q4_1.gguf) | i1-Q4_1 | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/kyrgyz_umlaut_corrector-i1-GGUF/resolve/main/kyrgyz_umlaut_corrector.i1-Q6_K.gguf) | i1-Q6_K | 0.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
acjuang/PP-OCRv5_server_det
|
acjuang
| 2025-09-15T06:41:28Z | 0 | 0 |
PaddleOCR
|
[
"PaddleOCR",
"OCR",
"PaddlePaddle",
"textline_detection",
"image-to-text",
"en",
"zh",
"arxiv:1212.1442",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2025-09-15T06:39:58Z |
---
license: apache-2.0
library_name: PaddleOCR
language:
- en
- zh
pipeline_tag: image-to-text
tags:
- OCR
- PaddlePaddle
- PaddleOCR
- textline_detection
---
# PP-OCRv5_server_det
## Introduction
PP-OCRv5_server_det is one of the PP-OCRv5_det series, the latest generation of text detection models developed by the PaddleOCR team. Designed for high-performance applications, it supports the detection of text in diverse scenarios—including handwriting, vertical, rotated, and curved text—across multiple languages such as Simplified Chinese, Traditional Chinese, English, and Japanese. Key features include robust handling of complex layouts, varying text sizes, and challenging backgrounds, making it suitable for practical applications like document analysis, license plate recognition, and scene text detection. The key accuracy metrics are as follows:
| Handwritten Chinese | Handwritten English | Printed Chinese | Printed English | Traditional Chinese | Ancient Text | Japanese | General Scenario | Pinyin | Rotation | Distortion | Artistic Text | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.803 | 0.841 | 0.945 | 0.917 | 0.815 | 0.676 | 0.772 | 0.797 | 0.671 | 0.8 | 0.876 | 0.673 | 0.827 |
## Quick Start
### Installation
1. PaddlePaddle
Please refer to the following commands to install PaddlePaddle using pip:
```bash
# for CUDA11.8
python -m pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
# for CUDA12.6
python -m pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/
# for CPU
python -m pip install paddlepaddle==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
```
For details about PaddlePaddle installation, please refer to the [PaddlePaddle official website](https://www.paddlepaddle.org.cn/en/install/quick).
2. PaddleOCR
Install the latest version of the PaddleOCR inference package from PyPI:
```bash
python -m pip install paddleocr
```
### Model Usage
You can quickly experience the functionality with a single command:
```bash
paddleocr text_detection \
--model_name PP-OCRv5_server_det \
-i https://cdn-uploads.huggingface.co/production/uploads/681c1ecd9539bdde5ae1733c/3ul2Rq4Sk5Cn-l69D695U.png
```
You can also integrate the text detection module's inference into your own project. Before running the following code, download the sample image to your local machine.
```python
from paddleocr import TextDetection
model = TextDetection(model_name="PP-OCRv5_server_det")
output = model.predict(input="3ul2Rq4Sk5Cn-l69D695U.png", batch_size=1)
for res in output:
res.print()
res.save_to_img(save_path="./output/")
res.save_to_json(save_path="./output/res.json")
```
After running, the obtained result is as follows:
```json
{'res': {'input_path': '/root/.paddlex/predict_input/3ul2Rq4Sk5Cn-l69D695U.png', 'page_index': None, 'dt_polys': array([[[ 632, 1429],
...,
[ 632, 1450]],
...,
[[ 353, 102],
...,
[ 353, 125]]], dtype=int16), 'dt_scores': [0.8436300312712586, 0.7779392262863483, ..., 0.8491056329808098]}}
```
The visualized image is as follows:

For details about usage command and descriptions of parameters, please refer to the [Document](https://paddlepaddle.github.io/PaddleOCR/latest/en/version3.x/module_usage/text_detection.html#iii-quick-start).
### Pipeline Usage
A single model's capabilities are limited, but a pipeline composed of several models can solve more difficult problems in real-world scenarios.
#### PP-OCRv5
The general OCR pipeline solves text recognition tasks by extracting text information from images and outputting it as text. The pipeline consists of five modules:
* Document Image Orientation Classification Module (Optional)
* Text Image Unwarping Module (Optional)
* Text Line Orientation Classification Module (Optional)
* Text Detection Module
* Text Recognition Module
Run a single command to quickly experience the OCR pipeline:
```bash
paddleocr ocr -i https://cdn-uploads.huggingface.co/production/uploads/681c1ecd9539bdde5ae1733c/3ul2Rq4Sk5Cn-l69D695U.png \
--text_detection_model_name PP-OCRv5_server_det \
--text_recognition_model_name PP-OCRv5_server_rec \
--use_doc_orientation_classify False \
--use_doc_unwarping False \
--use_textline_orientation True \
--save_path ./output \
--device gpu:0
```
Results are printed to the terminal:
```json
{'res': {'input_path': '/root/.paddlex/predict_input/3ul2Rq4Sk5Cn-l69D695U.png', 'page_index': None, 'model_settings': {'use_doc_preprocessor': True, 'use_textline_orientation': True}, 'doc_preprocessor_res': {'input_path': None, 'page_index': None, 'model_settings': {'use_doc_orientation_classify': False, 'use_doc_unwarping': False}, 'angle': -1}, 'dt_polys': array([[[ 352, 105],
...,
[ 352, 128]],
...,
[[ 632, 1431],
...,
[ 632, 1447]]], dtype=int16), 'text_det_params': {'limit_side_len': 64, 'limit_type': 'min', 'thresh': 0.3, 'max_side_limit': 4000, 'box_thresh': 0.6, 'unclip_ratio': 1.5}, 'text_type': 'general', 'textline_orientation_angles': array([0, ..., 0]), 'text_rec_score_thresh': 0.0, 'rec_texts': ['Algorithms for the Markov Entropy Decomposition', 'Andrew J. Ferris and David Poulin', 'Département de Physique, Université de Sherbrooke, Québec, JlK 2R1, Canada', '(Dated: October 31, 2018)', 'The Markov entropy decomposition (MED) is a recently-proposed, cluster-based simulation method for fi-', 'nite temperature quantum systems with arbitrary geometry. In this paper, we detail numerical algorithms for', 'performing the required steps of the MED, principally solving a minimization problem with a preconditioned', 'arXiv:1212.1442v1 [cond-mat.stat-mech] 6Dec 2012', "Newton's algorithm, as well as how to extract global susceptibilities and thermal responses. We demonstrate", 'the power of the method with the spin-1/2 XXZ model on the 2D square lattice, including the extraction of', 'critical points and details of each phase. Although the method shares some qualitative similarities with exact-', 'diagonalization, we show the MED is both more accurate and significantly more flexible.', 'PACS numbers: 05.10.−a,02.50.Ng, 03.67.−a,74.40.Kb', 'I.INTRODUCTION', 'This approximation becomes exact in the case of a 1D quan', 'tum (or classical) Markov chain [10], and leads to an expo-', 'Although the equations governing quantum many-body', 'nential reduction of cost for exact entropy calculations when', 'systems are simple to write down, finding solutions for the', 'the global density matrix is a higher-dimensional Markov net-', 'majority of systems remains incredibly difficult. Modern', 'work state [12, 13].', 'physics finds itself in need of new tools to compute the emer-', 'The second approximation used in the MED approach is', 'gent behavior of large, many-body systems.', 'related to the N-representibility problem. Given a set of lo-', 'There has been a great variety of tools developed to tackle', 'cal but overlapping reduced density matrices {pi}, it is a very', 'many-body problems, but in general, large 2D and 3D quan-', 'challenging problem to determine if there exists a global den-', 'tum systems remain hard to deal with. Most systems are', 'sity operator which is positive semi-definite and whose partial', 'thought to be non-integrable, so exact analytic solutions are', 'trace agrees with each ρi. This problem is QMA-hard (the', 'not usually expected. Direct numerical diagonalization can be', 'quantum analogue of NP) [14, 15], and is hopelessly diffi-', 'performed for relatively small systems — however the emer-', 'cult to enforce. Thus, the second approximation employed', 'gent behavior of a system in the thermodynamic limit may be', 'involves ignoring global consistency with a positive opera-', 'difficult to extract, especially in systems with large correlation', 'tor, while requiring local consistency on any overlapping re-', 'lengths. Monte Carlo approaches are technically exact (up to', 'gions between the ρi. At the zero-temperature limit, the MED', 'sampling error), but suffer from the so-called sign problem', 'approach becomes analogous to the variational nth-order re-', 'for fermionic, frustrated, or dynamical problems. Thus we are', 'duced density matrix approach, where positivity is enforced', 'limited to search for clever approximations to solve the ma-', 'on all reduced density matrices of size n [16–18].', 'jority of many-body problems.', 'The MED approach is an extremely flexible cluster method.', 'Over the past century, hundreds of such approximations', 'applicable to both translationally invariant systems of any di-', 'have been proposed, and we will mention just a few notable', 'mension in the thermodynamic limit, as well as finite systems', 'examples applicable to quantum lattice models. Mean-field', 'or systems without translational invariance (e.g. disordered', 'theory is simple and frequently arrives at the correct quali-', 'lattices, or harmonically trapped atoms in optical lattices).', 'tative description, but often fails when correlations are im-', 'The free energy given by MED is guaranteed to lower bound', 'portant. Density-matrix renormalisation group (DMRG) [1]', 'the true free energy, which in turn lower-bounds the ground', 'is efficient and extremely accurate at solving 1D problems,', 'state energy — thus providing a natural complement to varia-', 'but the computational cost grows exponentially with system', 'tional approaches which upper-bound the ground state energy.', 'size in two- or higher-dimensions [2, 3]. Related tensor-', 'The ability to provide a rigorous ground-state energy window', 'network techniques designed for 2D systems are still in their', 'is a powerful validation tool, creating a very compelling rea-', 'infancy [4–6]. Series-expansion methods [7] can be success-', 'son to use this approach.', 'ful, but may diverge or otherwise converge slowly, obscuring', 'In this paper we paper we present a pedagogical introduc-', 'the state in certain regimes. There exist a variety of cluster-', 'tion to MED, including numerical implementation issues and', 'based techniques, such as dynamical-mean-field theory [8]', 'applications to 2D quantum lattice models in the thermody-', 'and density-matrix embedding [9]', 'namiclimit.InSec.II.wegiveabriefderiyationofthe', 'Here we discuss the so-called Markov entropy decompo-', 'Markov entropy decomposition. Section III outlines a robust', 'sition (MED), recently proposed by Poulin & Hastings [10]', 'numerical strategy for optimizing the clusters that make up', '(and analogous to a slightly earlier classical algorithm [11]).', 'the decomposition. In Sec. IV we show how we can extend', 'This is a self-consistent cluster method for finite temperature', 'these algorithms to extract non-trivial information, such as', 'systems that takes advantage of an approximation of the (von', 'specific heat and susceptibilities. We present an application of', 'Neumann) entropy. In [10], it was shown that the entropy', 'the method to the spin-1/2 XXZ model on a 2D square lattice', 'per site can be rigorously upper bounded using only local in-', 'in Sec. V, describing how to characterize the phase diagram', 'formation — a local, reduced density matrix on N sites, say.', 'and determine critical points, before concluding in Sec. VI.'], 'rec_scores': array([0.99276221, ..., 0.95760632]), 'rec_polys': array([[[ 352, 105],
...,
[ 352, 128]],
...,
[[ 632, 1431],
...,
[ 632, 1447]]], dtype=int16), 'rec_boxes': array([[ 352, ..., 128],
...,
[ 632, ..., 1447]], dtype=int16)}}
```
If `save_path` is specified, the visualization results will be saved under `save_path`. The visualization output is shown below:

The command line is convenient for a quick trial. For project integration, only a few lines of code are needed:
```python
from paddleocr import PaddleOCR
ocr = PaddleOCR(
    text_detection_model_name="PP-OCRv5_server_det",
    text_recognition_model_name="PP-OCRv5_server_rec",
    use_doc_orientation_classify=False,  # Disable the document orientation classification model
    use_doc_unwarping=False,  # Disable the text image rectification model
    use_textline_orientation=False,  # Disable the text line orientation classification model
)
result = ocr.predict("./3ul2Rq4Sk5Cn-l69D695U.png")
for res in result:
    res.print()
    res.save_to_img("output")
    res.save_to_json("output")
```
For details about the usage commands and parameter descriptions, please refer to the [Document](https://paddlepaddle.github.io/PaddleOCR/latest/en/version3.x/pipeline_usage/OCR.html#2-quick-start).
#### PP-StructureV3
Layout analysis is a technique used to extract structured information from document images. PP-StructureV3 includes the following six modules:
* Layout Detection Module
* General OCR Pipeline
* Document Image Preprocessing Pipeline (Optional)
* Table Recognition Pipeline (Optional)
* Seal Recognition Pipeline (Optional)
* Formula Recognition Pipeline (Optional)
Run a single command to quickly experience the PP-StructureV3 pipeline:
```bash
paddleocr pp_structurev3 -i https://cdn-uploads.huggingface.co/production/uploads/681c1ecd9539bdde5ae1733c/mG4tnwfrvECoFMu-S9mxo.png \
--text_detection_model_name PP-OCRv5_server_det \
--use_doc_orientation_classify False \
--use_doc_unwarping False \
--use_textline_orientation False \
--device gpu:0
```
Results are printed to the terminal. If `save_path` is specified, the results will be saved under `save_path`. The predicted markdown visualization is shown below:

Only a few lines of code are needed to run pipeline inference. Taking the PP-StructureV3 pipeline as an example:
```python
from paddleocr import PPStructureV3
pipeline = PPStructureV3(
    text_detection_model_name="PP-OCRv5_server_det",
    use_doc_orientation_classify=False,  # Disable the document orientation classification model
    use_doc_unwarping=False,  # Disable the document unwarping module
    use_textline_orientation=False,  # Disable the text line orientation classification model
    device="gpu:0",  # Run model inference on GPU 0
)
output = pipeline.predict("./pp_structure_v3_demo.png")
for res in output:
    res.print()  # Print the structured prediction output
    res.save_to_json(save_path="output")  # Save the current image's structured result in JSON format
    res.save_to_markdown(save_path="output")  # Save the current image's result in Markdown format
```
The default text detection model used by the pipeline is `PP-OCRv5_server_det`. You can specify a different text detection model with the `text_detection_model_name` argument, or point to a local model file with the `text_detection_model_dir` argument, as sketched below. For details about the usage commands and parameter descriptions, please refer to the [Document](https://paddlepaddle.github.io/PaddleOCR/latest/en/version3.x/pipeline_usage/PP-StructureV3.html#2-quick-start).
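A minimal sketch of both options (the `PP-OCRv5_mobile_det` choice and the local directory path are illustrative, not taken from this card):
```python
from paddleocr import PPStructureV3

# Option 1: pick a different detection model by name (illustrative choice).
pipeline = PPStructureV3(text_detection_model_name="PP-OCRv5_mobile_det")

# Option 2: load detection weights from a local directory (placeholder path).
pipeline = PPStructureV3(
    text_detection_model_name="PP-OCRv5_server_det",
    text_detection_model_dir="./models/PP-OCRv5_server_det",
)
```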
## Links
[PaddleOCR Repo](https://github.com/paddlepaddle/paddleocr)
[PaddleOCR Documentation](https://paddlepaddle.github.io/PaddleOCR/latest/en/index.html)
|
junfeng0288/qwen2_5vl-3b_full_sft_0915_epoch1_stage2_2
|
junfeng0288
| 2025-09-15T06:41:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-15T06:26:09Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-3B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft_0915_epoch1_stage2_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_0915_epoch1_stage2_2
This model is a fine-tuned version of `qwen_sft_0915` (a local checkpoint at `/volume/pt-train/users/wzhang/fj-workspace/data/nt/xspadex/llama-factory/qwen_sft_0915`) on the sft_traj dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1164
## Model description
More information needed
## Intended uses & limitations
More information needed
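A minimal inference sketch for this Qwen2.5-VL fine-tune (the image path and prompt are placeholders; a transformers version with Qwen2.5-VL support is assumed):
```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "junfeng0288/qwen2_5vl-3b_full_sft_0915_epoch1_stage2_2"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.png")  # placeholder input image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},  # placeholder prompt
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
new_tokens = generated[:, inputs["input_ids"].shape[1]:]  # strip the prompt tokens
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```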
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
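As a sanity check, the effective batch size is the per-device batch size times the device count times the gradient-accumulation steps (an accumulation of 1 is an assumption inferred from these numbers, since 8 × 8 × 1 = 64):
```python
per_device_batch_size = 8
num_devices = 8
grad_accum_steps = 1  # assumption: inferred, not stated in the log
total_train_batch_size = per_device_batch_size * num_devices * grad_accum_steps
assert total_train_batch_size == 64
```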
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
trongg/866d0565-8c04-42c7-b437-56d930bf8fd6
|
trongg
| 2025-09-15T06:39:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:13:05Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tamewild/4b_v98_merged_e4
|
tamewild
| 2025-09-15T06:37:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:35:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only
|
HectorHe
| 2025-09-15T06:36:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:HectorHe/math7k",
"base_model:Qwen/Qwen1.5-MoE-A2.7B",
"base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:15:24Z |
---
base_model: Qwen/Qwen1.5-MoE-A2.7B
datasets: HectorHe/math7k
library_name: transformers
model_name: Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only
This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [HectorHe/math7k](https://huggingface.co/datasets/HectorHe/math7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/g2mj6405)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tamewild/4b_v98_merged_e3
|
tamewild
| 2025-09-15T06:35:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:34:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saurav1111/finetuned-embedding-model
|
saurav1111
| 2025-09-15T06:35:23Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-15T06:34:53Z |
---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which one, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as support from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
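As an illustration, a short semantic-search sketch (the corpus and query are made up):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
corpus = ["The cat sat on the mat.", "Stock markets fell sharply today.", "How do I bake bread at home?"]
query = "baking a loaf in my kitchen"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]  # cosine similarity to each corpus entry
print(corpus[int(scores.argmax())])  # prints the best match: the bread question
```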
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs, as sketched below.
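A minimal sketch of this in-batch objective (the temperature scale of 20 is a typical default, assumed here): true pairs sit on the diagonal of the similarity matrix, and every other in-batch embedding acts as a negative.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a, emb_b, scale=20.0):
    # emb_a[i] and emb_b[i] form a true pair; every emb_b[j], j != i, is a negative.
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = emb_a @ emb_b.T * scale        # scaled cosine-similarity matrix
    labels = torch.arange(scores.size(0))   # true pairs lie on the diagonal
    return F.cross_entropy(scores, labels)
```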
#### Hyperparameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
5456es/implicit_reward_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T06:35:00Z | 23 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"implicit",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:21:40Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- implicit
- pruned
---
# implicit_reward_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the implicit method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: implicit
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: implicit
- Pruning applied during training
- Fine-tuned on preference data
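The model name references a sigmoid-shaped DPO loss; a minimal sketch of that objective (the log-probabilities are per-sequence sums, and beta=0.1 is an illustrative default, not taken from this card):
```python
import torch.nn.functional as F

def dpo_sigmoid_loss(policy_chosen_logps, policy_rejected_logps,
                     ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: beta-scaled log-ratios of the policy vs. the frozen reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Sigmoid DPO loss: -log sigmoid(reward margin), averaged over the batch.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```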
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/implicit_reward_Llama-3.2-1B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/selective_dpo_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T06:34:39Z | 47 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"selective",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:20:12Z |
---
license: apache-2.0
base_model: Qwen2.5-0.5B-Instruct
tags:
- dpo
- preference-learning
- selective
- pruned
---
# selective_dpo_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the selective method.
## Model Details
- **Base Model**: Qwen2.5-0.5B-Instruct
- **Training Method**: selective
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: selective
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/selective_dpo_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/selective_dpo_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T06:34:14Z | 23 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"selective",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:18:08Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- selective
- pruned
---
# selective_dpo_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the selective method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: selective
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: selective
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/selective_dpo_Llama-3.2-1B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
tamewild/4b_v98_merged_e2
|
tamewild
| 2025-09-15T06:33:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:32:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.6-sigmoid
|
5456es
| 2025-09-15T06:33:30Z | 35 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T09:18:09Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.1-8B-Instruct_prune_0.6-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the last method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.6-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.4-sigmoid
|
5456es
| 2025-09-15T06:32:10Z | 26 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T09:12:55Z |
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.2-3B-Instruct_prune_0.4-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the last method.
## Model Details
- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.4-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-1e-2-gamma
|
HectorHe
| 2025-09-15T06:32:10Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:HectorHe/math7k",
"base_model:Qwen/Qwen1.5-MoE-A2.7B",
"base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:13:47Z |
---
base_model: Qwen/Qwen1.5-MoE-A2.7B
datasets: HectorHe/math7k
library_name: transformers
model_name: Qwen1.5-MOE-aux-free-sft-math7k-1e-2-gamma
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen1.5-MOE-aux-free-sft-math7k-1e-2-gamma
This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [HectorHe/math7k](https://huggingface.co/datasets/HectorHe/math7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-1e-2-gamma", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/j9nicdtw)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
5456es/selective_dpo_Llama-3.2-3B-Instruct_prune_0.7-sigmoid
|
5456es
| 2025-09-15T06:31:39Z | 37 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"selective",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-08T05:01:10Z |
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- selective
- pruned
---
# selective_dpo_Llama-3.2-3B-Instruct_prune_0.7-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the selective method.
## Model Details
- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: selective
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: selective
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/selective_dpo_Llama-3.2-3B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T06:30:36Z | 25 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"cluster",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:15:23Z |
---
license: apache-2.0
base_model: Qwen2.5-0.5B-Instruct
tags:
- dpo
- preference-learning
- cluster
- pruned
---
# cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the cluster method.
## Model Details
- **Base Model**: Qwen2.5-0.5B-Instruct
- **Training Method**: cluster
- **Pruning Ratio**: 0.5 (not recorded in the card metadata; inferred from the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: cluster
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/cluster_prune_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens caps the generated continuation only
# (max_length would also count the prompt tokens)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
jeongsjun/fintech_20250915
|
jeongsjun
| 2025-09-15T06:30:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-15T06:24:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
5456es/random_prune_Qwen2.5-7B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T06:30:16Z | 33 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-08T04:50:54Z |
---
license: apache-2.0
base_model: Qwen2.5-7B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Qwen2.5-7B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-7B-Instruct using the random method.
## Model Details
- **Base Model**: Qwen2.5-7B-Instruct
- **Training Method**: random
- **Pruning Ratio**: 0.5 (not recorded in the card metadata; inferred from the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/random_prune_Qwen2.5-7B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens caps the generated continuation only
# (max_length would also count the prompt tokens)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
Mahnoorzaidi/Flora-Guard
|
Mahnoorzaidi
| 2025-09-15T06:29:59Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T00:56:49Z |
---
license: apache-2.0
---
|
5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.7-sigmoid
|
5456es
| 2025-09-15T06:29:15Z | 33 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T04:04:51Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Llama-3.1-8B-Instruct_prune_0.7-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the random method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: random
- **Pruning Ratio**: 0.7 (not recorded in the card metadata; inferred from the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens caps the generated continuation only
# (max_length would also count the prompt tokens)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
vivek8423/Ai-influencer-model
|
vivek8423
| 2025-09-15T06:28:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"generated-from-training",
"license:mit",
"region:us"
] |
text-to-image
| 2025-09-14T19:42:47Z |
---
library_name: diffusers
license: mit
tags:
- text-to-image
- stable-diffusion
- generated-from-training
pipeline_tag: text-to-image
---
# Ai-Influencer Image Generation Model
This is a fine-tuned image generation model designed to create high-quality, photorealistic images of AI influencers from text prompts, optimized for API integration with automation tools like n8n.
## 🚀 API Usage (for n8n/Make/Zapier)
This model is deployed as an API endpoint through Hugging Face's Inference API. You can trigger image generation using HTTP requests.
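As an illustration (not taken from this card), a request against the generic Hugging Face Inference API for a text-to-image model typically looks like the sketch below; the endpoint pattern and token handling follow the standard HF API and are assumptions, not configuration confirmed by this repository:

```python
import requests

# Hypothetical sketch using the generic HF Inference API URL pattern;
# replace HF_TOKEN with your own access token.
API_URL = "https://api-inference.huggingface.co/models/vivek8423/Ai-influencer-model"
headers = {"Authorization": "Bearer HF_TOKEN"}

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "photorealistic portrait of an AI influencer, studio lighting"},
)
response.raise_for_status()

# Text-to-image endpoints return raw image bytes.
with open("output.png", "wb") as f:
    f.write(response.content)
```

The same POST request can be issued from an n8n HTTP Request node by supplying the URL, the Authorization header, and the JSON body shown above.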
### API Endpoint
|
5456es/implicit_reward_Llama-3.2-3B-Instruct_prune_0.3-sigmoid
|
5456es
| 2025-09-15T06:27:21Z | 28 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"implicit",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-08T04:46:27Z |
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- implicit
- pruned
---
# implicit_reward_Llama-3.2-3B-Instruct_prune_0.3-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the implicit method.
## Model Details
- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: implicit
- **Pruning Ratio**: 0.3 (not recorded in the card metadata; inferred from the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: implicit
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/implicit_reward_Llama-3.2-3B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens caps the generated continuation only
# (max_length would also count the prompt tokens)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/implicit_reward_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid
|
5456es
| 2025-09-15T06:26:48Z | 42 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"implicit",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:12:42Z |
---
license: apache-2.0
base_model: Qwen2.5-1.5B-Instruct
tags:
- dpo
- preference-learning
- implicit
- pruned
---
# implicit_reward_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-1.5B-Instruct using the implicit method.
## Model Details
- **Base Model**: Qwen2.5-1.5B-Instruct
- **Training Method**: implicit
- **Pruning Ratio**: 0.3 (not recorded in the card metadata; inferred from the model name)
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: implicit
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "5456es/implicit_reward_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens caps the generated continuation only
# (max_length would also count the prompt tokens)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|