| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
gtallec-kog/Llama-3.2-1B-nas-ARC-FT-bs-pruning-0
|
gtallec-kog
| 2025-09-16T08:36:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T08:32:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WenFengg/Mixtures16OE_14_5
|
WenFengg
| 2025-09-16T08:36:00Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-16T08:35:22Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
teysty/vjepa2-vitl-fpc16-256-ssv2-fdet_64-frames_1clip_1indice_pose-divide_5epochs
|
teysty
| 2025-09-16T08:35:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vjepa2",
"video-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-09-16T08:34:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MykosX/maya-anime-xl
|
MykosX
| 2025-09-16T08:34:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"image-to-image",
"anime",
"en",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-16T08:31:34Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- anime
pipeline_tag: text-to-image
---
# Maya anime xl
`MykosX/maya-anime-xl` is a Stable Diffusion model that can be used both for:
- text-to-image: generates quite good anime images, but may occasionally produce NSFW content
- image-to-image: tends to improve the quality of images generated by this model, and does a good job on images from other models
## Image show-case
<table>
<tr>
<th></th>
<th>(seed=300)**</th>
<th>(seed=400)**</th>
<th>(seed=500)**</th>
</tr>
<tr>
<td>
text-to-image
</td>
<td>
<img src="images/couple-having-fun-(generated)-300.jpg" width="600"/>
</td>
<td>
<img src="images/couple-having-fun-(generated)-400.jpg" width="600"/>
</td>
<td>
<img src="images/couple-having-fun-(generated)-500.jpg" width="600"/>
</td>
</tr>
<tr>
<td>
image-to-image
</td>
<td>
<img src="images/couple-having-fun-(from-generated)-300.jpg" width="600"/>
</td>
<td>
<img src="images/couple-having-fun-(from-generated)-400.jpg" width="600"/>
</td>
<td>
<img src="images/couple-having-fun-(from-generated)-500.jpg" width="600"/>
</td>
</tr>
</table>
<table>
<tr>
<th></th>
<th>(seed=300)**</th>
<th>(seed=400)**</th>
<th>(seed=500)**</th>
</tr>
<tr>
<td>
text-to-image
</td>
<td>
<img src="images/girl-posing-photo-(generated)-300.jpg" width="400"/>
</td>
<td>
<img src="images/girl-posing-photo-(generated)-400.jpg" width="400"/>
</td>
<td>
<img src="images/girl-posing-photo-(generated)-500.jpg" width="400"/>
</td>
</tr>
<tr>
<td>
image-to-image
</td>
<td>
<img src="images/girl-posing-photo-(from-generated)-300.jpg" width="400"/>
</td>
<td>
<img src="images/girl-posing-photo-(from-generated)-400.jpg" width="400"/>
</td>
<td>
<img src="images/girl-posing-photo-(from-generated)-500.jpg" width="400"/>
</td>
</tr>
</table>
<table>
<tr>
<th>Base image (from another model)</th>
<th>image-to-image (seed=300)**</th>
<th>image-to-image (seed=400)**</th>
<th>image-to-image (seed=500)**</th>
</tr>
<tr>
<td>
<img src="images/couple-having-fun-(original).jpg" width="600"/>
</td>
<td>
<img src="images/couple-having-fun-(from-original)-300.jpg" width="600"/>
</td>
<td>
<img src="images/couple-having-fun-(from-original)-400.jpg" width="600"/>
</td>
<td>
<img src="images/couple-having-fun-(from-original)-500.jpg" width="600"/>
</td>
</tr>
<tr>
<td>
<img src="images/girl-posing-photo-(original).jpg" width="600"/>
</td>
<td>
<img src="images/girl-posing-photo-(from-original)-300.jpg" width="600"/>
</td>
<td>
<img src="images/girl-posing-photo-(from-original)-400.jpg" width="600"/>
</td>
<td>
<img src="images/girl-posing-photo-(from-original)-500.jpg" width="600"/>
</td>
</tr>
<tr>
<th>Base image (from another model)</th>
<th>image-to-image (seed=300)**</th>
<th>image-to-image (seed=400)**</th>
<th>image-to-image (seed=500)**</th>
</tr>
</table>
** using these defaults unless specified:
<table>
<tr>
<th>Setting</th>
<th>Default value</th>
</tr>
<tr>
<td>prompt (landscape)</td>
<td>landscape image, a boy and girl having fun on the beach</td>
</tr>
<tr>
<td>prompt (portrait)</td>
<td>portrait image, a girl in a nice dress posing for a photo</td>
</tr>
<tr>
<td>negative prompt</td>
<td>deformed iris, deformed pupils, bad anatomy, cloned face, extra arms, extra legs, missing fingers, too many fingers</td>
</tr>
<tr>
<td>size (landscape)</td>
<td>1024 x 768</td>
</tr>
<tr>
<td>size (portrait)</td>
<td> 768 x 1024</td>
</tr>
<tr>
<td>seed</td>
<td>300</td>
</tr>
<tr>
<td>guidance scale</td>
<td>12.0</td>
</tr>
<tr>
<td>strength</td>
<td>0.5</td>
</tr>
<tr>
<td>inference steps</td>
<td>30</td>
</tr>
</table>
## Diffusers
For more general information on how to run text-to-image models with 🧨 Diffusers, see [the docs](https://huggingface.co/docs/diffusers/using-diffusers/conditional_image_generation).
1. Installation
```
pip install diffusers transformers accelerate
```
2. Running example for text-to-image generation
```py
import torch
from diffusers import AutoPipelineForText2Image

# Load the pipeline on CPU in float32; a GPU with a half-precision dtype will be considerably faster if available
pipe = AutoPipelineForText2Image.from_pretrained('MykosX/maya-anime-xl', torch_dtype=torch.float32)
pipe = pipe.to("cpu")

prompt = "portrait image, a girl in a nice dress posing for a photo"
image = pipe(prompt).images[0]
image.save("./images/text-to-image.png")
```
3. Running example for image-to-image generation
```py
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image
pipe = AutoPipelineForImage2Image.from_pretrained('MykosX/maya-anime-xl', torch_dtype=torch.float32)
pipe = pipe.to("cpu")
base_image = Image.open("./images/girl-posing-photo-(original).jpg")
prompt = "portrait image, a girl in a nice dress posing for a photo"
image = pipe(prompt, image=base_image).images[0]
image.save("./images/image-to-image.png")
```
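4. Running example with the show-case defaults
The defaults from the table above can be passed straight to the pipelines; the snippet below is a small sketch of the image-to-image call with those values applied (the seed is supplied through a `torch.Generator`, and the output file name is arbitrary).
```py
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained('MykosX/maya-anime-xl', torch_dtype=torch.float32)
pipe = pipe.to("cpu")

base_image = Image.open("./images/girl-posing-photo-(original).jpg")
prompt = "portrait image, a girl in a nice dress posing for a photo"
negative_prompt = "deformed iris, deformed pupils, bad anatomy, cloned face, extra arms, extra legs, missing fingers, too many fingers"

# Defaults from the table above: seed=300, guidance scale=12.0, strength=0.5, 30 inference steps
generator = torch.Generator("cpu").manual_seed(300)
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=base_image,
    strength=0.5,
    guidance_scale=12.0,
    num_inference_steps=30,
    generator=generator,
).images[0]
image.save("./images/image-to-image-defaults.png")
```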
## PS
Play with the model and don't hesitate to show off
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758011571
|
devivodowdlel
| 2025-09-16T08:34:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T08:33:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ga1nang/medgemma4b-it-lora-medrag_4
|
Ga1nang
| 2025-09-16T08:34:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T08:33:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Donchocho/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_tricky_dolphin
|
Donchocho
| 2025-09-16T08:34:33Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am graceful_tricky_dolphin",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-23T09:20:04Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am graceful_tricky_dolphin
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WenFengg/Mixtures16OE_14_2
|
WenFengg
| 2025-09-16T08:33:03Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-16T08:32:22Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
cuongdk253/gemma3-12b-ft-16092025-1
|
cuongdk253
| 2025-09-16T08:33:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-09-16T07:29:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
daniyalfarh/Ata-Physics-Tutor-V2
|
daniyalfarh
| 2025-09-16T08:32:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T06:05:40Z |
---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** daniyalfarh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LZXzju/UI-S1-7B
|
LZXzju
| 2025-09-16T08:32:42Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-16T08:32:42Z |
---
license: apache-2.0
---
|
PIA-SPACE-LAB/dinov3-vit7b16-pretrain-lvd1689m
|
PIA-SPACE-LAB
| 2025-09-16T08:32:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"dinov3_vit",
"image-feature-extraction",
"dino",
"dinov3",
"arxiv:2508.10104",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-feature-extraction
| 2025-09-16T08:17:03Z |
---
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- en
tags:
- dino
- dinov3
- arxiv:2508.10104
license: other
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
pipeline_tag: image-feature-extraction
library_name: transformers
---
# Model Card for DINOv3
DINOv3 is a family of versatile vision foundation models that outperforms the specialized state of the art across a broad range of settings, without fine-tuning. DINOv3 produces high-quality dense features that achieve outstanding performance on various vision tasks, significantly surpassing previous self- and weakly-supervised foundation models.
## Model Details
These are Vision Transformer and ConvNeXt models trained following the method described in the DINOv3 paper. 12 models are provided:
- 10 models pretrained on web data (LVD-1689M dataset)
  - 1 ViT-7B trained from scratch
  - 5 ViT-S/S+/B/L/H+ models distilled from the ViT-7B
  - 4 ConvNeXt-{T/S/B/L} models distilled from the ViT-7B
- 2 models pretrained on satellite data (SAT-493M dataset)
  - 1 ViT-7B trained from scratch
  - 1 ViT-L distilled from the ViT-7B
Each Transformer-based model takes an image as input and returns a class token, patch tokens (and register tokens). These models follow a ViT architecture, with a patch size of 16. For a 224x224 image, this results in 1 class token + 4 register tokens + 196 patch tokens = 201 tokens (for DINOv2 with registers this resulted in 1 + 4 + 256 = 261 tokens).
The models can accept larger images provided the image dimensions are multiples of the patch size (16). If this condition is not met, the model crops the image to the closest smaller multiple of the patch size.
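As a quick illustration of the token arithmetic above, the snippet below computes the number of output tokens for a given input size (a minimal sketch that simply applies the 1 class + 4 register + (H/16)·(W/16) patch formula):
```python
def dinov3_token_count(height: int, width: int, patch_size: int = 16, num_register_tokens: int = 4) -> int:
    # Crop to the closest smaller multiple of the patch size, as described above
    h = (height // patch_size) * patch_size
    w = (width // patch_size) * patch_size
    num_patches = (h // patch_size) * (w // patch_size)
    return 1 + num_register_tokens + num_patches  # class + register + patch tokens

print(dinov3_token_count(224, 224))  # 201
print(dinov3_token_count(512, 640))  # 1 + 4 + 32 * 40 = 1285
```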
### Model Description
- **Developed by:** Meta AI
- **Model type:** Vision Transformer, ConvNeXt
- **License:** [DINOv3 License](https://ai.meta.com/resources/models-and-libraries/dinov3-license/)
### Model Sources
- **Repository:** [https://github.com/facebookresearch/dinov3](https://github.com/facebookresearch/dinov3)
- **Paper:** [https://arxiv.org/abs/2508.10104](https://arxiv.org/abs/2508.10104)
## Uses
The models are vision backbones providing multi-purpose features for downstream tasks.
### Direct Use
The models can be used without fine-tuning, with downstream classifiers as simple as linear layers, to obtain competitive results:
- on image classification, using k-NN classifiers on the class token
- on image classification, with logistic regression classifiers applied on the class token (see the sketch after this list)
- on image classification, with a linear layer applied on the class token and the average of the patch tokens
- on image retrieval using nearest neighbors
- on geometric and semantic 3D keypoint correspondences
- on depth estimation, semantic segmentation, using linear layers
- on unsupervised object discovery
- on video segmentation tracking
- on video classification, using a small 4-layer attentive probe
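To make the linear-probe setup concrete, here is a minimal sketch of the logistic-regression variant from the list above. It assumes class-token features have already been extracted with a frozen backbone (for instance via the `AutoModel` example later in this card); the random arrays below are stand-ins for real features and labels.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in data: rows are class-token embeddings from a frozen DINOv3 backbone (4096-dim for ViT-7B)
rng = np.random.default_rng(0)
train_feats, train_labels = rng.normal(size=(1000, 4096)), rng.integers(0, 10, size=1000)
val_feats, val_labels = rng.normal(size=(200, 4096)), rng.integers(0, 10, size=200)

# Linear probe: the backbone stays frozen, only this classifier is trained
clf = LogisticRegression(max_iter=1000)
clf.fit(train_feats, train_labels)
print("Top-1 accuracy:", clf.score(val_feats, val_labels))
```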
### Downstream Use
While fine-tuning the models can yield some gains, it is recommended to keep this option as a last resort: the frozen features are expected to provide good performance out-of-the-box.
## Bias, Risks, and Limitations
Compared to DINOv2 and SEERv2, DINOv3 delivers somewhat consistent performance across income categories on geographical fairness and diversity, although with a notable performance drop in the low-income bucket compared to the highest-income bucket.
DINOv3 also achieves relatively good scores across different regions, improving over its predecessor DINOv2. However, a relative difference is still observed between Europe and Africa.
### Recommendations
Fine-tuning is expected to increase the biases in the features produced by the model as they will be tuned to the fine-tuning labels.
## How to Get Started with the Model
The example below demonstrates how to obtain an image embedding with the `Pipeline` or the `AutoModel` class.
```python
from transformers import pipeline
from transformers.image_utils import load_image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = load_image(url)
feature_extractor = pipeline(
model="facebook/dinov3-vit7b16-pretrain-lvd1689m",
task="image-feature-extraction",
)
features = feature_extractor(image)
```
```python
import torch
from transformers import AutoImageProcessor, AutoModel
from transformers.image_utils import load_image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = load_image(url)
pretrained_model_name = "facebook/dinov3-vit7b16-pretrain-lvd1689m"
processor = AutoImageProcessor.from_pretrained(pretrained_model_name)
model = AutoModel.from_pretrained(
pretrained_model_name,
device_map="auto",
)
inputs = processor(images=image, return_tensors="pt").to(model.device)
with torch.inference_mode():
outputs = model(**inputs)
pooled_output = outputs.pooler_output
print("Pooled output shape:", pooled_output.shape)
```
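To recover the dense patch features from the same outputs, the token sequence can be split into class, register and patch tokens, and the patches reshaped into a spatial grid. The sketch below continues from the `AutoModel` example above and assumes the standard layout of 1 class token, then 4 register tokens, then patch tokens:
```python
# Continues from the AutoModel example above
num_register_tokens = 4
hidden = outputs.last_hidden_state                   # (batch, 1 + 4 + num_patches, dim)
cls_token = hidden[:, 0]                             # global image embedding
patch_tokens = hidden[:, 1 + num_register_tokens:]   # dense per-patch features

# Reshape the patch tokens into a (batch, dim, H/16, W/16) feature map
batch_size, _, dim = patch_tokens.shape
h = inputs["pixel_values"].shape[-2] // 16
w = inputs["pixel_values"].shape[-1] // 16
feature_map = patch_tokens.reshape(batch_size, h, w, dim).permute(0, 3, 1, 2)
print("Feature map shape:", feature_map.shape)
```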
## Training Details
### Training Data
- Web dataset (LVD-1689M): a curated dataset of 1,689 million images drawn from a large data pool of 17 billion web images collected from public posts on Instagram
- Satellite dataset (SAT-493M): a dataset of 493 million 512x512 images sampled randomly from Maxar RGB ortho-rectified imagery at 0.6 meter resolution
### Training Procedure
**Training objective:**
- DINO self-distillation loss with multi-crop
- iBOT masked-image modeling loss
- KoLeo regularization on [CLS] tokens (sketched below)
- Gram anchoring
- **Training regime:** PyTorch FSDP2 (with bf16 and fp8 matrix multiplications)
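For readers unfamiliar with the KoLeo term above, here is a simplified, schematic re-implementation based on its published description (not the actual training code): it spreads L2-normalized [CLS] features apart by maximizing the log distance of each feature to its nearest neighbor in the batch.
```python
import torch
import torch.nn.functional as F

def koleo_loss(cls_features: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # L2-normalize the [CLS] features
    x = F.normalize(cls_features, dim=-1)
    # Cosine similarities; mask the diagonal so a sample is not its own nearest neighbor
    sim = x @ x.t()
    sim.fill_diagonal_(-2.0)
    # Highest cosine similarity corresponds to the smallest Euclidean distance
    nn_sim, _ = sim.max(dim=1)
    nn_dist = torch.sqrt(torch.clamp(2.0 - 2.0 * nn_sim, min=0.0))
    # Penalize small nearest-neighbor distances, encouraging features to spread out
    return -torch.log(nn_dist + eps).mean()

print(koleo_loss(torch.randn(16, 384)))
```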
**Distillation:**
- Distillation follows the standard DINOv3 pretraining procedure, except the teacher is a frozen pretrained ViT-7B.
## Evaluation
**Results**
The reader is referred to the associated paper for details on the evaluation protocols.
*Results for ViT backbones pretrained (or distilled) on web (LVD-1689M)*
<table>
<tr>
<th></th>
<!-- <th></th> -->
<th colspan="4">Global Tasks</th>
<th colspan="5">Dense Tasks</th>
</tr>
<tr>
<th>Model</th>
<!-- <th>Dataset</th> -->
<th>IN-ReaL</th>
<th>IN-R</th>
<th>Obj.Net</th>
<th>Ox.-H</th>
<th>ADE20k</th>
<th>NYU↓</th>
<th>DAVIS</th>
<th>NAVI</th>
<th>SPair</th>
</tr>
<tr>
<td>DINOv3 ViT-S/16</td>
<!-- <td>LVD-1689M</td> -->
<td align="right">87.0</td>
<td align="right">60.4</td>
<td align="right">50.9</td>
<td align="right">49.5</td>
<td align="right">47.0</td>
<td align="right">0.403</td>
<td align="right">72.7</td>
<td align="right">56.3</td>
<td align="right">50.4</td>
</tr>
<tr>
<td>DINOv3 ViT-S+/16</td>
<!-- <td>LVD-1689M</td> -->
<td align="right">88.0</td>
<td align="right">68.8</td>
<td align="right">54.6</td>
<td align="right">50.0</td>
<td align="right">48.8</td>
<td align="right">0.399</td>
<td align="right">75.5</td>
<td align="right">57.1</td>
<td align="right">55.2</td>
</tr>
<tr>
<td>DINOv3 ViT-B/16</td>
<!-- <td>LVD-1689M</td> -->
<td align="right">89.3</td>
<td align="right">76.7</td>
<td align="right">64.1</td>
<td align="right">58.5</td>
<td align="right">51.8</td>
<td align="right">0.373</td>
<td align="right">77.2</td>
<td align="right">58.8</td>
<td align="right">57.2</td>
</tr>
<tr>
<td>DINOv3 ViT-L/16</td>
<!-- <td>LVD-1689M</td> -->
<td align="right">90.2</td>
<td align="right">88.1</td>
<td align="right">74.8</td>
<td align="right">63.1</td>
<td align="right">54.9</td>
<td align="right">0.352</td>
<td align="right">79.9</td>
<td align="right">62.3</td>
<td align="right">61.3</td>
</tr>
<tr>
<td>DINOv3 ViT-H+/16</td>
<!-- <td>LVD-1689M</td> -->
<td align="right">90.3</td>
<td align="right">90.0</td>
<td align="right">78.6</td>
<td align="right">64.5</td>
<td align="right">54.8</td>
<td align="right">0.352</td>
<td align="right">79.3</td>
<td align="right">63.3</td>
<td align="right">56.3</td>
</tr>
<tr>
<td>DINOv3 ViT-7B/16</td>
<!-- <td>LVD-1689M</td> -->
<td align="right">90.4</td>
<td align="right">91.1</td>
<td align="right">91.1</td>
<td align="right">72.8</td>
<td align="right">55.9</td>
<td align="right">0.309</td>
<td align="right">79.7</td>
<td align="right">64.4</td>
<td align="right">58.7</td>
</tr>
</table>
*Results for ConvNeXt backbones distilled on web (LVD-1689M)*
<table>
<tr>
<th></th>
<th colspan="6">Global Tasks</th>
<th colspan="2">Dense Tasks</th>
</tr>
<tr>
<th>Model</th>
<th colspan="2">IN-ReaL</th>
<th colspan="2">IN-R</th>
<th colspan="2">Obj.Net</th>
<th>ADE20k</th>
<th>NYU↓</th>
</tr>
<tr>
<td></td>
<td>@256px</td>
<td>@512px</td>
<td>@256px</td>
<td>@512px</td>
<td>@256px</td>
<td>@512px</td>
<td colspan="2"></td>
</tr>
<tr>
<td>DINOv3 ConvNeXt Tiny</td>
<td align="right">86.6</td>
<td align="right">87.7</td>
<td align="right">73.7</td>
<td align="right">74.1</td>
<td align="right">52.6</td>
<td align="right">58.7</td>
<td align="right">42.7</td>
<td align="right">0.448</td>
</tr>
<tr>
<td>DINOv3 ConvNeXt Small</td>
<td align="right">87.9</td>
<td align="right">88.7</td>
<td align="right">73.7</td>
<td align="right">74.1</td>
<td align="right">52.6</td>
<td align="right">58.7</td>
<td align="right">44.8</td>
<td align="right">0.432</td>
</tr>
<tr>
<td>DINOv3 ConvNeXt Base</td>
<td align="right">88.5</td>
<td align="right">89.2</td>
<td align="right">77.2</td>
<td align="right">78.2</td>
<td align="right">56.2</td>
<td align="right">61.3</td>
<td align="right">46.3</td>
<td align="right">0.420</td>
</tr>
<tr>
<td>DINOv3 ConvNeXt Large</td>
<td align="right">88.9</td>
<td align="right">89.4</td>
<td align="right">81.3</td>
<td align="right">82.4</td>
<td align="right">59.3</td>
<td align="right">65.2</td>
<td align="right">47.8</td>
<td align="right">0.403</td>
</tr>
</table>
*Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M)*
<table>
<tr>
<th></th>
<th colspan="7">(GEO-Bench) Classification</th>
</tr>
<tr>
<th>Model</th>
<th>m-BEnet</th>
<th>m-brick-kiln</th>
<th>m-eurosat</th>
<th>m-forestnet</th>
<th>m-pv4ger</th>
<th>m-so2sat</th>
<th>mean</th>
</tr>
<tr>
<td>DINOv3 ViT-L/16</td>
<td>73.0</td>
<td>96.5</td>
<td>94.1</td>
<td>60.6</td>
<td>96.0</td>
<td>57.4</td>
<td>79.6</td>
</tr>
<tr>
<td>DINOv3 ViT-7B/16</td>
<td>74.0</td>
<td>97.2</td>
<td>94.8</td>
<td>62.3</td>
<td>96.1</td>
<td>62.1</td>
<td>81.1</td>
</tr>
<tr>
<th></th>
<th colspan="7">(GEO-Bench) Segmentation</th>
</tr>
<tr>
<th>Model</th>
<th>m-cashew</th>
<th>m-chesapeake</th>
<th>m-NeonTree</th>
<th>m-nz-cattle</th>
<th>m-pv4ger-seg</th>
<th>m-SA-crop</th>
<th>mean</th>
</tr>
<tr>
<td>DINOv3 ViT-L/16</td>
<td>94.2</td>
<td>75.6</td>
<td>61.8</td>
<td>83.7</td>
<td>95.2</td>
<td>36.8</td>
<td>74.5</td>
</tr>
<tr>
<td>DINOv3 ViT-7B/16</td>
<td>94.1</td>
<td>76.6</td>
<td>62.6</td>
<td>83.4</td>
<td>95.5</td>
<td>37.6</td>
<td>75.0</td>
</tr>
</table>
## Environmental Impact
- **Hardware Type:** Nvidia H100
- **Hours used:** 61,440 hours for ViT-7B model training
- **Cloud Provider:** Private infrastructure
- **Compute Region:** USA
- **Carbon Emitted:** 18t CO2eq
## Technical Specifications
### Model Architecture and Objective
Vision Transformer models:
- ViT-S (21M parameters): patch size 16, embedding dimension 384, 4 register tokens, 6 heads, MLP FFN, RoPE
- ViT-S+ (29M parameters): patch size 16, embedding dimension 384, 4 register tokens, 6 heads, SwiGLU FFN, RoPE
- ViT-B (86M parameters): patch size 16, embedding dimension 768, 4 register tokens, 12 heads, MLP FFN, RoPE
- ViT-L (300M parameters): patch size 16, embedding dimension 1024, 4 register tokens, 16 heads, MLP FFN, RoPE
- ViT-H+ (840M parameters): patch size 16, embedding dimension 1280, 4 register tokens, 20 heads, SwiGLU FFN, RoPE
- ViT-7B (6716M parameters): patch size 16, embedding dimension 4096, 4 register tokens, 32 heads, SwiGLU FFN, RoPE
ConvNeXt models:
- ConvNeXt Tiny (29M parameters)
- ConvNeXt Small (50M parameters)
- ConvNeXt Base (89M parameters)
- ConvNeXt Large (198M parameters)
### Compute Infrastructure
#### Hardware
Nvidia H100 GPUs
#### Software
PyTorch 2.7
## More Information
See the [blog post](https://ai.meta.com/blog/dinov3-self-supervised-vision-model/) and the associated [website](https://ai.meta.com/dinov3/).
## Citation
**BibTeX**
```
@misc{simeoni2025dinov3,
title={{DINOv3}},
author={Sim{\'e}oni, Oriane and Vo, Huy V. and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and Massa, Francisco and Haziza, Daniel and Wehrstedt, Luca and Wang, Jianyuan and Darcet, Timoth{\'e}e and Moutakanni, Th{\'e}o and Sentana, Leonel and Roberts, Claire and Vedaldi, Andrea and Tolan, Jamie and Brandt, John and Couprie, Camille and Mairal, Julien and J{\'e}gou, Herv{\'e} and Labatut, Patrick and Bojanowski, Piotr},
year={2025},
eprint={2508.10104},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.10104},
}
```
|
csikasote/mms-1b-all-bemgen-combined-m100f50-52-DAT-2e-1
|
csikasote
| 2025-09-16T08:31:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-16T07:54:37Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m100f50-52-DAT-2e-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m100f50-52-DAT-2e-1
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3173
- Cer: 0.0914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch is shown after the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 52
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
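For convenience, the hyperparameters above translate roughly into the following 🤗 Transformers `TrainingArguments`; this is a hedged sketch (the `output_dir` is a placeholder, and the original training script may have set additional options):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./mms-1b-all-bemgen-combined-m100f50-52-DAT-2e-1",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,   # total train batch size 16
    seed=52,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30.0,
    fp16=True,                       # native AMP mixed precision
)
```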
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.7712 | 0.5618 | 100 | 2.9581 | 0.9991 |
| 0.6481 | 1.1236 | 200 | 0.9916 | 0.2687 |
| 0.5194 | 1.6854 | 300 | 0.4409 | 0.1359 |
| 0.6171 | 2.2472 | 400 | 0.3786 | 0.1065 |
| 0.6925 | 2.8090 | 500 | 0.3416 | 0.0958 |
| 0.6617 | 3.3708 | 600 | 0.3215 | 0.0897 |
| 0.6506 | 3.9326 | 700 | 0.3176 | 0.0920 |
| 0.6635 | 4.4944 | 800 | 0.3174 | 0.0914 |
| 0.6417 | 5.0562 | 900 | 0.3154 | 0.0907 |
| 0.6595 | 5.6180 | 1000 | 0.3273 | 0.0936 |
| 0.6394 | 6.1798 | 1100 | 0.3160 | 0.0923 |
| 0.6599 | 6.7416 | 1200 | 0.3251 | 0.0968 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
empirischtech/DeepSeek-LLM-67B-Chat-gptq-8bit
|
empirischtech
| 2025-09-16T08:29:46Z | 0 | 0 | null |
[
"safetensors",
"llama",
"biology",
"chemistry",
"finance",
"legal",
"climate",
"medical",
"text-generation",
"conversational",
"en",
"dataset:allenai/c4",
"base_model:deepseek-ai/deepseek-llm-67b-chat",
"base_model:quantized:deepseek-ai/deepseek-llm-67b-chat",
"license:cc-by-4.0",
"8-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-08-29T09:54:48Z |
---
license: cc-by-4.0
datasets:
- allenai/c4
language:
- en
metrics:
- accuracy
base_model:
- deepseek-ai/deepseek-llm-67b-chat
pipeline_tag: text-generation
tags:
- biology
- chemistry
- finance
- legal
- climate
- medical
---
# Overview
This document presents the evaluation results of `DeepSeek-LLM-67B-Chat`, an **8-bit quantized model using GPTQ**, evaluated with the **Language Model Evaluation Harness** on the **ARC, GPQA**, and **IFEval** benchmarks.
---
## 📊 Evaluation Summary
| **Metric** | **Value** | **Description** |
|------------|-----------|-----------------|
| **ARC-Challenge** | `58.11%` | Raw (`acc,none`) |
| **GPQA Overall** | `25.44%` | Averaged across GPQA-Diamond, GPQA-Extended, GPQA-Main (n-shot, zeroshot, CoT, Generative) |
| **GPQA (n-shot acc)** | `33.04%` | Averaged over GPQA-Diamond, GPQA-Extended, GPQA-Main (`acc,none`) |
| **GPQA (zeroshot acc)** | `32.51%` | Averaged over GPQA-Diamond, GPQA-Extended, GPQA-Main (`acc,none`) |
| **GPQA (CoT n-shot)** | `17.21%` | Averaged over GPQA-Diamond, GPQA-Extended, GPQA-Main (`exact_match flexible-extract`) |
| **GPQA (CoT zeroshot)** | `17.52%` | Averaged over GPQA-Diamond, GPQA-Extended, GPQA-Main (`exact_match flexible-extract`) |
| **GPQA (Generative n-shot)** | `26.49%` | Averaged over GPQA-Diamond, GPQA-Extended, GPQA-Main (`exact_match flexible-extract`) |
| **IFEval Overall** | `43.16%` | Averaged across Prompt-level Strict, Prompt-level Loose, Inst-level Strict, Inst-level Loose |
| **IFEval (Prompt-level Strict)** | `36.23%` | Prompt-level strict accuracy |
| **IFEval (Prompt-level Loose)** | `38.45%` | Prompt-level loose accuracy |
| **IFEval (Inst-level Strict)** | `47.84%` | Inst-level strict accuracy |
| **IFEval (Inst-level Loose)** | `50.12%` | Inst-level loose accuracy |
---
## ⚙️ Model Configuration
- **Model:** `DeepSeek-LLM-67B-Chat`
- **Parameters:** `67 billion`
- **Quantization:** `8-bit GPTQ`
- **Source:** Hugging Face (`hf`)
- **Precision:** `torch.float16`
- **Hardware:** `NVIDIA A100 80GB PCIe`
- **CUDA Version:** `12.4`
- **PyTorch Version:** `2.6.0+cu124`
- **Batch Size:** `1`
📌 **Interpretation:**
- The evaluation was performed on a **high-performance GPU (A100 80GB)**.
- The quantized model has a significantly smaller memory footprint than the full-precision version, thanks to **GPTQ 8-bit quantization**.
- A **single-sample batch size** was used, which may slow evaluation.
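An evaluation like the one above can be reproduced with the harness's Python API. The snippet below is a hedged sketch (function and task names follow recent lm-evaluation-harness releases and may need adjusting for your version; only a subset of the reported benchmarks is shown):
```python
import lm_eval

# Evaluate the quantized checkpoint on a subset of the reported benchmarks
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=empirischtech/DeepSeek-LLM-67B-Chat-gptq-8bit,dtype=float16",
    tasks=["arc_challenge", "ifeval"],
    batch_size=1,
)
print(results["results"])
```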
---
## 📈 Performance Insights
- **Quantization Impact:** The **8-bit GPTQ quantization** reduces memory usage but may also impact accuracy slightly.
- **Zero-shot Limitation:** Performance could improve with **few-shot prompting** (providing examples before testing).
---
📌 Let us know if you need further analysis or model tuning! 🚀
|
csikasote/mms-1b-all-bemgen-combined-m100f50-52-DAT-3e-1
|
csikasote
| 2025-09-16T08:29:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-16T08:01:01Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m100f50-52-DAT-3e-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m100f50-52-DAT-3e-1
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3119
- Cer: 0.0867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 52
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 2.4829 | 0.5618 | 100 | 2.9256 | 0.9983 |
| 0.8687 | 1.1236 | 200 | 0.7315 | 0.2318 |
| 0.5967 | 1.6854 | 300 | 0.3664 | 0.1039 |
| 0.6445 | 2.2472 | 400 | 0.3120 | 0.0867 |
| 0.6622 | 2.8090 | 500 | 0.3013 | 0.0863 |
| 0.7038 | 3.3708 | 600 | 0.2928 | 0.0825 |
| 0.683 | 3.9326 | 700 | 0.2898 | 0.0829 |
| 0.7074 | 4.4944 | 800 | 0.2905 | 0.0814 |
| 0.6921 | 5.0562 | 900 | 0.2916 | 0.0845 |
| 0.6799 | 5.6180 | 1000 | 0.2909 | 0.0846 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
alberto-lorente/roberta_AGEM_davidsonTOfountaTOlong_exp_TIME_5
|
alberto-lorente
| 2025-09-16T08:28:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-16T08:28:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rvcmodel/rvc_model
|
Rvcmodel
| 2025-09-16T08:27:12Z | 0 | 1 | null |
[
"region:us"
] | null | 2024-11-25T03:34:41Z |
<div align="center">
Simplified Chinese | [English](https://github.com/RVCModel/RVC_Model/blob/main/READMEen.md) | [Japanese](https://github.com/RVCModel/RVC_Model/blob/main/READMEja.md)
</div>
# Miaoyin (妙音) RVC Model Workshop | Voice Model Download and Customization Platform
> Official website: [https://klrvc.com](https://klrvc.com)
## 📌 Platform Overview
**Miaoyin RVC Model Workshop** (hereafter "Miaoyin") is a platform for sharing and customizing voice models built on **Retrieval-based Voice Conversion (RVC)** technology. It offers a rich catalog of free and premium voice models and supports creative uses such as voice cloning, voice conversion, and AI covers.
Miaoyin aims to lower the barrier to using RVC, giving everyday users, audio creators, and dubbing enthusiasts a high-quality, low-cost way to obtain models, and it also lets users submit their own models for sharing or sale.
---
## 🧠 What is RVC?
RVC (Retrieval-based Voice Conversion) is a retrieval-based voice conversion technique that turns one person's voice into another person's timbre while preserving the emotion and intonation of the original speech. Its core strengths include:
- ✅ **Low data requirements**: a usable model can be trained from just a few minutes of audio;
- ✅ **High fidelity**: supports singing, speech, and emotional expression;
- ✅ **Open source and free**: built on open-source projects with an active community;
- ✅ **Real-time conversion**: usable for live streaming, dubbing, AI covers, and similar scenarios.
---
## 🎧 Model Categories and Recommendations
Miaoyin hosts a wide variety of models covering **anime characters, game characters, virtual streamers, celebrity voices, and original voices**, with support for Chinese, Japanese, English, and other languages.
### 🔥 Popular Free Models:
| Model | Category | Highlights | Download |
|----------|------|------|-----------|
| **Alisa Kujou (艾莉莎·九条)** | Anime character | Girlish voice, supports Japanese | [Download](https://klrvc.com/mxgf/1906.html) |
| **Tokai Teio (Uma Musume)** | Game character | Loli voice, emotionally expressive | [Download](https://klrvc.com/mxgf/888.html) |
| **Conan (boyish voice)** | Anime character | Chinese/Japanese bilingual, child/boyish voice | [Download](https://klrvc.com/mxgf/xxx.html) |
| **Taylor Swift (for covers)** | Celebrity voice | English, suited to AI covers | [Download](https://klrvc.com/mxgf/xxx.html) |
| **A-Fei (阿飞, deep male voice)** | Original male voice | Full bass, suited to dubbing | [Download](https://klrvc.com/mxgf/xxx.html) |
> More models: [https://klrvc.com/mxgf](https://klrvc.com/mxgf)
---
## 🛠️ Model Customization Service
Miaoyin offers a **custom model service**: upload your own audio data and the platform will train a dedicated voice model for you.
### ✅ Customization Workflow:
1. Submit your requirements and audio material (5–30 minutes recommended);
2. The platform runs its training cycle (usually 2–5 days);
3. A preview audio clip is provided for you to confirm the result;
4. Pay the balance and download the model files.
### 💰 Pricing:
- Standard custom model: from ¥100;
- Premium custom model: priced according to dataset complexity;
- Members receive a 20–50% discount;
- Lifetime members get one free customization.
---
## 📦 How to Use the Models
RVC models are usually distributed as `.pth` files and must be used with the RVC main program. The following bundle is recommended:
- **RVC v2 main program (CUDA supported)**
Download: [123pan](https://www.123pan.com/s/5tIqVv-QHNcv.html)
- **Quick usage steps**:
1. Download and extract the RVC main program;
2. Place the model file in the `weights` folder;
3. Launch `infer-web.py`;
4. Select the model, upload your audio, and run inference.
---
## 📅 Latest Updates (September 2025)
- 🔥 New **Yue (悦) f048K base model** released: supports Chinese, Japanese, and English and improves training on small datasets;
- 🎤 Several new **AI-cover-oriented models** added, such as Ayumi Hamasaki, Taylor Swift, and Ai Yinlin;
- 🧑🎤 Launched a **boyish-voice model series**, suitable for story dubbing and virtual streamers;
- 📈 The customization service has been upgraded to support singing, emotional speech, and mixed multilingual training;
- 📢 For a limited time, lifetime members get one free exclusive model customization; see the [membership center](https://klrvc.com/vip) for details.
---
## ⚠️ Copyright and Usage Notice
- All models are for **learning and research use only**; commercial use is prohibited;
- Copyright in the voices used by the models belongs to the original owners; Miaoyin does not own these rights;
- Users must ensure their usage does not infringe third-party rights;
- Where a model carries a fee, the fee is for the service, not a purchase of copyright.
---
## 📬 Contact Us
- 🌐 Website: [https://klrvc.com](https://klrvc.com)
- 📧 Email: please contact customer service via the official website
- 💬 QQ group: lifetime members may join the dedicated discussion group
- 📍 Platform support: model submissions, customization, revenue sharing, and technical support
---
## 📌 Acknowledgments
Thanks to the open-source community for its contributions to the RVC project, and to every model trainer and sharer. Miaoyin will keep providing richer and higher-quality RVC model resources and services.
---
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758010972
|
devivodowdlel
| 2025-09-16T08:26:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T08:23:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hdnfnfn/blockassist-bc-woolly_shaggy_mosquito_1758011121
|
hdnfnfn
| 2025-09-16T08:25:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"woolly shaggy mosquito",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T08:25:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- woolly shaggy mosquito
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chrispian/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_lightfooted_swan
|
chrispian
| 2025-09-16T08:21:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lanky_lightfooted_swan",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T08:20:48Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lanky_lightfooted_swan
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zy95279000/Marcoro14-7B-slerp
|
zy95279000
| 2025-09-16T08:19:23Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-09-16T08:15:13Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# Marcoro14-7B-slerp
Marcoro14-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
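## 💻 Usage
A minimal inference sketch for the merged model (an assumed usage example, not part of the original configuration):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zy95279000/Marcoro14-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # bfloat16 matches the merge dtype
)

prompt = "Explain SLERP model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```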
|
mradermacher/L3.3-Cogmoblated-70B-GGUF
|
mradermacher
| 2025-09-16T08:17:25Z | 359 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TheSkullery/L3.3-Cogmoblated-70B",
"base_model:quantized:TheSkullery/L3.3-Cogmoblated-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-14T05:48:18Z |
---
base_model: TheSkullery/L3.3-Cogmoblated-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/TheSkullery/L3.3-Cogmoblated-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#L3.3-Cogmoblated-70B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
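As a rough sketch (assuming `huggingface_hub` and `llama-cpp-python` are installed; neither is prescribed by this card), one of the single-file quants listed below can be fetched and run like this:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the single-file quants (Q4_K_S from the table below).
gguf_path = hf_hub_download(
    repo_id="mradermacher/L3.3-Cogmoblated-70B-GGUF",
    filename="L3.3-Cogmoblated-70B.Q4_K_S.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is retrieval-augmented generation?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```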
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.3-Cogmoblated-70B-GGUF/resolve/main/L3.3-Cogmoblated-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
monentiadev/es-input-classifier
|
monentiadev
| 2025-09-16T08:16:38Z | 11 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-16T08:13:58Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
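A hypothetical loading sketch for the mixin integration described above; the class name, layer sizes, and constructor arguments below are placeholders, since the repository does not document its architecture.
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical placeholder class: any nn.Module that inherits PyTorchModelHubMixin
# gains save_pretrained/from_pretrained/push_to_hub, which is how this repo was created.
class InputClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128, num_labels: int = 2):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, x):
        return self.classifier(x)

# Loading only works if the class definition matches the stored config:
# model = InputClassifier.from_pretrained("monentiadev/es-input-classifier")
```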
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758010349
|
devivodowdlel
| 2025-09-16T08:13:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T08:13:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BKM1804/51b010d3-6e4a-4eab-a0d2-1cc02f293a7d
|
BKM1804
| 2025-09-16T08:13:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T07:48:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
okeozek/asd
|
okeozek
| 2025-09-16T08:13:10Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-16T08:13:10Z |
---
license: apache-2.0
---
|
Naman150/ProcedureFT_Final
|
Naman150
| 2025-09-16T08:13:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T21:41:09Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Naman150
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
howellyoung1/OmniVideo11B
|
howellyoung1
| 2025-09-16T08:11:01Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:2507.06119",
"region:us"
] | null | 2025-07-31T07:11:39Z |
<div align="center">
<h1>Omni-Video: Democratizing Unified Video Understanding and Generation</h1>
[Zhiyu Tan*](https://openreview.net/profile?id=~Zhiyu_Tan1) · [Hao Yang*](https://openreview.net/profile?id=~Yang_Hao4) ·[Luozheng Qin](https://openreview.net/profile?id=~Luozheng_Qin1) · [Jia Gong](https://scholar.google.com/citations?user=ZV-ThegAAAAJ&hl=zh-CN&oi=ao) · [Mengping Yang](https://scholar.google.com/citations?user=yF34LtcAAAAJ&hl=zh-CN)<sup>✉</sup> · [Hao Li](https://scholar.google.com/citations?user=pHN-QIwAAAAJ&hl=zh-CN) <sup>✉</sup>
<sup>*</sup>Equal Contribution
<sup>✉</sup>Corresponding Authors
<a href='https://howellyoung-s.github.io/OmniVideo_project/'><img src='https://img.shields.io/badge/Project-Page-green'></a>
<a href='https://arxiv.org/pdf/2507.06119'><img src='https://img.shields.io/badge/Technique-Report-red'></a>
<a href='https://howellyoung-s.github.io/OmniVideo_project/'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a>
</div>
> **TL;DR:** ***Omni-Video*** is a unified video model that handles a range of video tasks, including video understanding, generation, and editing, within a single framework.
## Abstract
Notable breakthroughs in unified understanding and generation modeling have led to remarkable advancements in image understanding, reasoning, production and editing, yet current foundational models predominantly focus on processing images, creating a gap in the development of unified models for video understanding and generation. This report presents ***Omni-Video***, an efficient and effective unified framework for video understanding, generation, and instruction-based editing. Our key insight is to teach existing multimodal large language models (MLLMs) to produce continuous visual clues that are used as the input of diffusion decoders, which produce high-quality videos conditioned on these visual clues. To fully unlock the potential of our system for unified video modeling, we integrate several technical improvements: 1) a lightweight architectural design that attaches a vision head on top of the MLLM and an adapter before the input of the diffusion decoder; the former produces visual tokens, and the latter adapts these visual tokens to the conditional space of the diffusion decoder; and 2) an efficient multi-stage training scheme that facilitates a fast connection between MLLMs and diffusion decoders with limited data and computational resources. We empirically demonstrate that our model exhibits satisfactory generalization abilities across video generation, editing and understanding tasks.
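To make the data flow concrete, here is a purely conceptual PyTorch sketch of the two added components; the module names and dimensions are illustrative assumptions and do not reflect the released implementation.
```python
import torch
import torch.nn as nn

class VisionHead(nn.Module):
    """Maps MLLM hidden states to continuous visual tokens (conceptual sketch)."""
    def __init__(self, llm_dim: int = 4096, vis_dim: int = 1024, num_tokens: int = 64):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Linear(llm_dim, vis_dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, llm_dim); keep the final num_tokens positions.
        return self.proj(hidden_states[:, -self.num_tokens:, :])

class DiffusionAdapter(nn.Module):
    """Adapts visual tokens to the conditional space of the diffusion decoder."""
    def __init__(self, vis_dim: int = 1024, cond_dim: int = 2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vis_dim, cond_dim), nn.GELU(), nn.Linear(cond_dim, cond_dim)
        )

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        return self.mlp(visual_tokens)

# Toy forward pass: MLLM hidden states -> visual tokens -> diffusion conditioning.
hidden = torch.randn(1, 256, 4096)
cond = DiffusionAdapter()(VisionHead()(hidden))  # shape: (1, 64, 2048)
```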
## 🔥 Latest News
* Jul 15, 2025: 🔥🔥 We are actively organizing our code and will make it publicly available in the next few weeks. Stay tuned!
* Jul 07, 2025: We release the [Technique-Report](https://arxiv.org/pdf/2507.06119) of **Omni-Video**
* Jul 07, 2025: We release the [project page](https://howellyoung-s.github.io/OmniVideo_project/) of **Omni-Video**
## BibTex
```bibtex
@article{tan2025omni,
title={Omni-Video: Democratizing Unified Video Understanding and Generation},
author={Tan, Zhiyu and Yang, Hao and Qin, Luozheng and Gong, Jia and Yang, Mengping and Li, Hao},
journal={arXiv preprint arXiv:2507.06119},
year={2025}
}
```
|
DevQuasar/fluently.FluentlyQwen2.5-32B-GGUF
|
DevQuasar
| 2025-09-16T08:07:23Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:fluently/FluentlyQwen2.5-32B",
"base_model:quantized:fluently/FluentlyQwen2.5-32B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-16T05:24:44Z |
---
base_model:
- fluently/FluentlyQwen2.5-32B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [fluently/FluentlyQwen2.5-32B](https://huggingface.co/fluently/FluentlyQwen2.5-32B)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
alberto-lorente/Meta-Llama-3_1-8B-Instruct-bnb-4bit-GENERAL-TASK-all_inmigants
|
alberto-lorente
| 2025-09-16T08:06:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T08:06:34Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
library_name: transformers
model_name: Meta-Llama-3_1-8B-Instruct-bnb-4bit-GENERAL-TASK-all_inmigants
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for Meta-Llama-3_1-8B-Instruct-bnb-4bit-GENERAL-TASK-all_inmigants
This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-instruct-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alberto-lorente/Meta-Llama-3_1-8B-Instruct-bnb-4bit-GENERAL-TASK-all_inmigants", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kevinshin/qwen2.5-1.5b-rft-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos
|
kevinshin
| 2025-09-16T08:05:13Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k",
"base_model:finetune:kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T06:57:43Z |
---
base_model: kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k
library_name: transformers
model_name: qwen2.5-1.5b-rft-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen2.5-1.5b-rft-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos
This model is a fine-tuned version of [kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k](https://huggingface.co/kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen2.5-1.5b-rft-rpo-lr-1e-5-alpha-1-beta-0.1-wc-cw-3k-neg-rethink-pos", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/lw4bsjsc)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bhavinjawade/Subtitles-Sept15-Gemma-27b-human-finetune
|
bhavinjawade
| 2025-09-16T08:05:05Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T18:50:42Z |
---
base_model: google/gemma-3-27b-it
library_name: transformers
model_name: Subtitles-Sept15-Gemma-27b-human-finetune
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Subtitles-Sept15-Gemma-27b-human-finetune
This model is a fine-tuned version of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bhavinjawade/Subtitles-Sept15-Gemma-27b-human-finetune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.56.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758009734
|
devivodowdlel
| 2025-09-16T08:03:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T08:03:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Zacktree/code-bench-CodeGemma-7B-cgv1-ds_v3
|
Zacktree
| 2025-09-16T08:03:06Z | 107 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/codegemma-7b",
"base_model:adapter:google/codegemma-7b",
"license:gemma",
"region:us"
] | null | 2025-09-13T09:33:45Z |
---
library_name: peft
license: gemma
base_model: google/codegemma-7b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: code-bench-CodeGemma-7B-cgv1-ds_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-bench-CodeGemma-7B-cgv1-ds_v3
This model is a fine-tuned version of [google/codegemma-7b](https://huggingface.co/google/codegemma-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0475
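Since this repository stores a PEFT adapter rather than full model weights, a minimal loading sketch under that assumption looks like the following; the prompt and generation settings are illustrative only.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the adapter in this repository to its google/codegemma-7b base model.
base = AutoModelForCausalLM.from_pretrained(
    "google/codegemma-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Zacktree/code-bench-CodeGemma-7B-cgv1-ds_v3")
tokenizer = AutoTokenizer.from_pretrained("google/codegemma-7b")

# Illustrative prompt; the training dataset is not described in this card.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```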
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.7003 | 0.0530 | 50 | 0.6702 |
| 0.5467 | 0.1061 | 100 | 0.5399 |
| 0.4662 | 0.1591 | 150 | 0.4138 |
| 0.3608 | 0.2121 | 200 | 0.3042 |
| 0.3032 | 0.2652 | 250 | 0.2450 |
| 0.2313 | 0.3182 | 300 | 0.2067 |
| 0.1953 | 0.3713 | 350 | 0.1729 |
| 0.1701 | 0.4243 | 400 | 0.1495 |
| 0.1593 | 0.4773 | 450 | 0.1382 |
| 0.1491 | 0.5304 | 500 | 0.1334 |
| 0.1668 | 0.5834 | 550 | 0.1282 |
| 0.1433 | 0.6364 | 600 | 0.1259 |
| 0.1457 | 0.6895 | 650 | 0.1241 |
| 0.1476 | 0.7425 | 700 | 0.1215 |
| 0.139 | 0.7955 | 750 | 0.1176 |
| 0.1209 | 0.8486 | 800 | 0.1159 |
| 0.1365 | 0.9016 | 850 | 0.1148 |
| 0.1239 | 0.9547 | 900 | 0.1157 |
| 0.116 | 1.0077 | 950 | 0.1097 |
| 0.1145 | 1.0607 | 1000 | 0.1104 |
| 0.1187 | 1.1146 | 1050 | 0.1067 |
| 0.117 | 1.1676 | 1100 | 0.1069 |
| 0.1219 | 1.2206 | 1150 | 0.1059 |
| 0.1192 | 1.2737 | 1200 | 0.1052 |
| 0.1296 | 1.3267 | 1250 | 0.1023 |
| 0.1016 | 1.3797 | 1300 | 0.1016 |
| 0.1051 | 1.4328 | 1350 | 0.1011 |
| 0.1207 | 1.4858 | 1400 | 0.1016 |
| 0.1132 | 1.5388 | 1450 | 0.1031 |
| 0.1143 | 1.5919 | 1500 | 0.0997 |
| 0.1089 | 1.6449 | 1550 | 0.0988 |
| 0.1164 | 1.6980 | 1600 | 0.0966 |
| 0.1092 | 1.7510 | 1650 | 0.0961 |
| 0.1056 | 1.8040 | 1700 | 0.0957 |
| 0.1072 | 1.8571 | 1750 | 0.0948 |
| 0.1029 | 1.9101 | 1800 | 0.0942 |
| 0.1117 | 1.9631 | 1850 | 0.0931 |
| 0.1126 | 2.0162 | 1900 | 0.0931 |
| 0.104 | 2.0700 | 1950 | 0.0944 |
| 0.1094 | 2.1230 | 2000 | 0.0925 |
| 0.1044 | 2.1761 | 2050 | 0.0944 |
| 0.0981 | 2.2291 | 2100 | 0.0926 |
| 0.1031 | 2.2822 | 2150 | 0.0915 |
| 0.0933 | 2.3352 | 2200 | 0.0919 |
| 0.1085 | 2.3882 | 2250 | 0.0917 |
| 0.1106 | 2.4413 | 2300 | 0.0905 |
| 0.0988 | 2.4943 | 2350 | 0.0897 |
| 0.0909 | 2.5473 | 2400 | 0.0883 |
| 0.1025 | 2.6004 | 2450 | 0.0874 |
| 0.1016 | 2.6534 | 2500 | 0.0873 |
| 0.0927 | 2.7064 | 2550 | 0.0860 |
| 0.0942 | 2.7595 | 2600 | 0.0854 |
| 0.0888 | 2.8125 | 2650 | 0.0859 |
| 0.091 | 2.8656 | 2700 | 0.0851 |
| 0.0922 | 2.9186 | 2750 | 0.0855 |
| 0.0949 | 2.9716 | 2800 | 0.0839 |
| 0.0855 | 3.0247 | 2850 | 0.0841 |
| 0.0955 | 3.0777 | 2900 | 0.0831 |
| 0.0831 | 3.1307 | 2950 | 0.0817 |
| 0.0843 | 3.1838 | 3000 | 0.0814 |
| 0.0756 | 3.2368 | 3050 | 0.0812 |
| 0.0893 | 3.2898 | 3100 | 0.0806 |
| 0.0787 | 3.3429 | 3150 | 0.0827 |
| 0.0842 | 3.3959 | 3200 | 0.0790 |
| 0.079 | 3.4490 | 3250 | 0.0791 |
| 0.0797 | 3.5020 | 3300 | 0.0773 |
| 0.0774 | 3.5550 | 3350 | 0.0777 |
| 0.0751 | 3.6081 | 3400 | 0.0779 |
| 0.079 | 3.6611 | 3450 | 0.0781 |
| 0.0849 | 3.7141 | 3500 | 0.0762 |
| 0.0852 | 3.7672 | 3550 | 0.0759 |
| 0.0742 | 3.8202 | 3600 | 0.0770 |
| 0.0719 | 3.8732 | 3650 | 0.0755 |
| 0.07 | 3.9263 | 3700 | 0.0757 |
| 0.0778 | 3.9793 | 3750 | 0.0759 |
| 0.0792 | 4.0324 | 3800 | 0.0751 |
| 0.0705 | 4.0854 | 3850 | 0.0745 |
| 0.0679 | 4.1384 | 3900 | 0.0741 |
| 0.0619 | 4.1915 | 3950 | 0.0734 |
| 0.0689 | 4.2445 | 4000 | 0.0731 |
| 0.0653 | 4.2975 | 4050 | 0.0732 |
| 0.0678 | 4.3506 | 4100 | 0.0733 |
| 0.07 | 4.4036 | 4150 | 0.0719 |
| 0.0656 | 4.4566 | 4200 | 0.0739 |
| 0.062 | 4.5097 | 4250 | 0.0732 |
| 0.0676 | 4.5627 | 4300 | 0.0718 |
| 0.0668 | 4.6158 | 4350 | 0.0722 |
| 0.0701 | 4.6688 | 4400 | 0.0718 |
| 0.067 | 4.7218 | 4450 | 0.0709 |
| 0.0686 | 4.7749 | 4500 | 0.0722 |
| 0.0649 | 4.8279 | 4550 | 0.0751 |
| 0.0711 | 4.8809 | 4600 | 0.0708 |
| 0.0747 | 4.9340 | 4650 | 0.0711 |
| 0.0622 | 4.9870 | 4700 | 0.0700 |
| 0.0634 | 5.0400 | 4750 | 0.0695 |
| 0.0714 | 5.0931 | 4800 | 0.0756 |
| 0.0615 | 5.1461 | 4850 | 0.0732 |
| 0.0612 | 5.1992 | 4900 | 0.0704 |
| 0.0599 | 5.2522 | 4950 | 0.0686 |
| 0.0567 | 5.3052 | 5000 | 0.0679 |
| 0.0593 | 5.3583 | 5050 | 0.0673 |
| 0.0576 | 5.4113 | 5100 | 0.0675 |
| 0.0628 | 5.4643 | 5150 | 0.0664 |
| 0.0572 | 5.5174 | 5200 | 0.0660 |
| 0.06 | 5.5704 | 5250 | 0.0659 |
| 0.0568 | 5.6234 | 5300 | 0.0660 |
| 0.058 | 5.6765 | 5350 | 0.0656 |
| 0.0559 | 5.7295 | 5400 | 0.0650 |
| 0.0549 | 5.7826 | 5450 | 0.0652 |
| 0.0605 | 5.8356 | 5500 | 0.0649 |
| 0.0539 | 5.8886 | 5550 | 0.0641 |
| 0.0567 | 5.9417 | 5600 | 0.0637 |
| 0.0627 | 5.9971 | 5650 | 0.0633 |
| 0.0576 | 6.0501 | 5700 | 0.0635 |
| 0.0596 | 6.1032 | 5750 | 0.0654 |
| 0.0751 | 6.1562 | 5800 | 0.0645 |
| 0.0675 | 6.2092 | 5850 | 0.0636 |
| 0.0575 | 6.2623 | 5900 | 0.0626 |
| 0.0618 | 6.3153 | 5950 | 0.0626 |
| 0.0641 | 6.3683 | 6000 | 0.0632 |
| 0.0612 | 6.4214 | 6050 | 0.0616 |
| 0.0599 | 6.4744 | 6100 | 0.0623 |
| 0.0598 | 6.5274 | 6150 | 0.0607 |
| 0.0597 | 6.5805 | 6200 | 0.0607 |
| 0.0595 | 6.6335 | 6250 | 0.0602 |
| 0.0612 | 6.6866 | 6300 | 0.0591 |
| 0.058 | 6.7396 | 6350 | 0.0589 |
| 0.0584 | 6.7926 | 6400 | 0.0580 |
| 0.0544 | 6.8457 | 6450 | 0.0580 |
| 0.0563 | 6.8987 | 6500 | 0.0576 |
| 0.0569 | 6.9517 | 6550 | 0.0568 |
| 0.0571 | 7.0048 | 6600 | 0.0572 |
| 0.0463 | 7.0578 | 6650 | 0.0574 |
| 0.0461 | 7.1108 | 6700 | 0.0570 |
| 0.0468 | 7.1639 | 6750 | 0.0568 |
| 0.051 | 7.2169 | 6800 | 0.0564 |
| 0.0478 | 7.2700 | 6850 | 0.0561 |
| 0.0487 | 7.3230 | 6900 | 0.0557 |
| 0.0542 | 7.3760 | 6950 | 0.0563 |
| 0.0504 | 7.4291 | 7000 | 0.0560 |
| 0.046 | 7.4821 | 7050 | 0.0550 |
| 0.0469 | 7.5351 | 7100 | 0.0554 |
| 0.0473 | 7.5882 | 7150 | 0.0550 |
| 0.0451 | 7.6412 | 7200 | 0.0548 |
| 0.0519 | 7.6942 | 7250 | 0.0546 |
| 0.0522 | 7.7473 | 7300 | 0.0543 |
| 0.048 | 7.8003 | 7350 | 0.0546 |
| 0.0519 | 7.8534 | 7400 | 0.0537 |
| 0.0439 | 7.9064 | 7450 | 0.0537 |
| 0.0474 | 7.9594 | 7500 | 0.0531 |
| 0.0456 | 8.0125 | 7550 | 0.0533 |
| 0.0439 | 8.0655 | 7600 | 0.0533 |
| 0.0423 | 8.1185 | 7650 | 0.0535 |
| 0.0405 | 8.1716 | 7700 | 0.0534 |
| 0.0444 | 8.2246 | 7750 | 0.0539 |
| 0.0416 | 8.2776 | 7800 | 0.0533 |
| 0.0433 | 8.3307 | 7850 | 0.0541 |
| 0.0466 | 8.3837 | 7900 | 0.0522 |
| 0.047 | 8.4368 | 7950 | 0.0523 |
| 0.0455 | 8.4898 | 8000 | 0.0528 |
| 0.0471 | 8.5428 | 8050 | 0.0517 |
| 0.042 | 8.5959 | 8100 | 0.0517 |
| 0.0433 | 8.6489 | 8150 | 0.0520 |
| 0.0488 | 8.7019 | 8200 | 0.0517 |
| 0.0432 | 8.7550 | 8250 | 0.0521 |
| 0.0472 | 8.8080 | 8300 | 0.0514 |
| 0.042 | 8.8610 | 8350 | 0.0511 |
| 0.0407 | 8.9141 | 8400 | 0.0505 |
| 0.0415 | 8.9671 | 8450 | 0.0509 |
| 0.038 | 9.0202 | 8500 | 0.0520 |
| 0.0408 | 9.0732 | 8550 | 0.0521 |
| 0.0367 | 9.1262 | 8600 | 0.0520 |
| 0.0343 | 9.1793 | 8650 | 0.0507 |
| 0.0379 | 9.2323 | 8700 | 0.0510 |
| 0.0589 | 9.2853 | 8750 | 0.0554 |
| 0.0398 | 9.3384 | 8800 | 0.0518 |
| 0.04 | 9.3914 | 8850 | 0.0514 |
| 0.0375 | 9.4444 | 8900 | 0.0521 |
| 0.04 | 9.4975 | 8950 | 0.0503 |
| 0.0381 | 9.5505 | 9000 | 0.0502 |
| 0.0386 | 9.6036 | 9050 | 0.0495 |
| 0.05 | 9.6566 | 9100 | 0.0519 |
| 0.0389 | 9.7096 | 9150 | 0.0501 |
| 0.0415 | 9.7627 | 9200 | 0.0499 |
| 0.038 | 9.8157 | 9250 | 0.0503 |
| 0.0433 | 9.8687 | 9300 | 0.0498 |
| 0.036 | 9.9218 | 9350 | 0.0496 |
| 0.0377 | 9.9748 | 9400 | 0.0488 |
| 0.038 | 10.0278 | 9450 | 0.0495 |
| 0.0384 | 10.0809 | 9500 | 0.0501 |
| 0.035 | 10.1339 | 9550 | 0.0488 |
| 0.0344 | 10.1870 | 9600 | 0.0484 |
| 0.0356 | 10.2400 | 9650 | 0.0486 |
| 0.0341 | 10.2930 | 9700 | 0.0501 |
| 0.0333 | 10.3461 | 9750 | 0.0495 |
| 0.0328 | 10.3991 | 9800 | 0.0496 |
| 0.0337 | 10.4521 | 9850 | 0.0482 |
| 0.0347 | 10.5052 | 9900 | 0.0489 |
| 0.0318 | 10.5582 | 9950 | 0.0489 |
| 0.0307 | 10.6112 | 10000 | 0.0481 |
| 0.0344 | 10.6643 | 10050 | 0.0482 |
| 0.0359 | 10.7173 | 10100 | 0.0490 |
| 0.0325 | 10.7704 | 10150 | 0.0482 |
| 0.0355 | 10.8234 | 10200 | 0.0495 |
| 0.0361 | 10.8764 | 10250 | 0.0494 |
| 0.0368 | 10.9295 | 10300 | 0.0486 |
| 0.0378 | 10.9825 | 10350 | 0.0475 |
| 0.0313 | 11.0355 | 10400 | 0.0475 |
| 0.037 | 11.0886 | 10450 | 0.0473 |
| 0.0377 | 11.1416 | 10500 | 0.0486 |
| 0.0282 | 11.1946 | 10550 | 0.0479 |
| 0.032 | 11.2477 | 10600 | 0.0498 |
| 0.0387 | 11.3007 | 10650 | 0.0501 |
| 0.0389 | 11.3538 | 10700 | 0.0486 |
| 0.0333 | 11.4068 | 10750 | 0.0495 |
| 0.032 | 11.4598 | 10800 | 0.0469 |
| 0.0305 | 11.5129 | 10850 | 0.0479 |
| 0.0362 | 11.5659 | 10900 | 0.0470 |
| 0.0316 | 11.6189 | 10950 | 0.0487 |
| 0.0337 | 11.6720 | 11000 | 0.0484 |
| 0.0386 | 11.7250 | 11050 | 0.0479 |
| 0.0313 | 11.7780 | 11100 | 0.0475 |
| 0.0313 | 11.8311 | 11150 | 0.0466 |
| 0.031 | 11.8841 | 11200 | 0.0474 |
| 0.0318 | 11.9372 | 11250 | 0.0464 |
| 0.0339 | 11.9902 | 11300 | 0.0475 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.5.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nyongnyongi/test_3
|
nyongnyongi
| 2025-09-16T08:01:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T07:52:29Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winterfeb/gemma-270m-ko-en
|
winterfeb
| 2025-09-16T07:57:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma3_text",
"text-generation",
"base_model:adapter:unsloth/gemma-3-270m-it",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T07:53:09Z |
---
base_model: unsloth/gemma-3-270m-it
library_name: peft
tags:
- base_model:adapter:unsloth/gemma-3-270m-it
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
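A minimal sketch, assuming the LoRA adapter simply loads on top of the base model listed in the metadata (`unsloth/gemma-3-270m-it`) under the repository id `winterfeb/gemma-270m-ko-en`; the prompt and intended task below are placeholders, not documented behavior.
```python
# Sketch only: load the LoRA adapter on top of its base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-3-270m-it"        # base model from the card metadata
adapter_id = "winterfeb/gemma-270m-ko-en"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

# Placeholder prompt; the adapter's intended task is not documented here.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```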
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF
|
mradermacher
| 2025-09-16T07:57:25Z | 785 | 0 |
transformers
|
[
"transformers",
"gguf",
"fi",
"en",
"dataset:LumiOpen/poro2-instruction-collection",
"base_model:LumiOpen/Llama-Poro-2-70B-SFT",
"base_model:quantized:LumiOpen/Llama-Poro-2-70B-SFT",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-15T05:03:40Z |
---
base_model: LumiOpen/Llama-Poro-2-70B-SFT
datasets:
- LumiOpen/poro2-instruction-collection
language:
- fi
- en
library_name: transformers
license: llama3.3
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/LumiOpen/Llama-Poro-2-70B-SFT
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-Poro-2-70B-SFT-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
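If all you need is to rejoin the split Q6_K download listed below, a few lines of Python will also do it; this is only a sketch and assumes both part files have already been downloaded into the working directory under their original names.
```python
import shutil

# Assumed local filenames, matching the part files listed in the table below.
parts = [
    "Llama-Poro-2-70B-SFT.i1-Q6_K.gguf.part1of2",
    "Llama-Poro-2-70B-SFT.i1-Q6_K.gguf.part2of2",
]

# Concatenate the parts, in order, into a single loadable GGUF file.
with open("Llama-Poro-2-70B-SFT.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```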
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-Poro-2-70B-SFT-i1-GGUF/resolve/main/Llama-Poro-2-70B-SFT.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Clemylia/Limy-basique
|
Clemylia
| 2025-09-16T07:57:12Z | 0 | 0 | null |
[
"pytorch",
"text-classification-from-scratch",
"text-classification",
"from-scratch",
"fr",
"dataset:Clem27sey/Nacid",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-16T07:23:18Z |
---
tags:
- text-classification
- from-scratch
license: mit
datasets:
- Clem27sey/Nacid
language:
- fr
pipeline_tag: text-classification
---
### `Limy-basique` 🚀

### Text Classification Model (From Scratch)
`Limy-basique` is a text classification model designed to distinguish questions about capital cities from questions about animals. Trained on the `Clem27sey/Nacid` dataset, it was built entirely **`from scratch`** by Clemylia. It is well suited to binary classification tasks and can serve as a base for similar projects.
-----
#### 📌 Using the Model
This model takes a question as input and classifies it into one of the following two categories:
* **`0`**: Questions about **Animals** 🐾
* **`1`**: Questions about **Capital Cities** 🏙️
-----
#### ⚙️ How to use the model with PyTorch
To make predictions, you first need to load the model architecture and its trained weights. Below is a complete, copy-paste-ready example showing how to query the model.
**1. Model class code**
```python
import torch
import torch.nn as nn

# Model class. It must be defined before the weights can be loaded.
class SimpleClassifier(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, text):
        embedded = self.embedding(text)
        _, (hidden, _) = self.lstm(embedded.view(len(text), 1, -1))
        output = self.fc(hidden.squeeze(0))
        return output

# Simple tokenizer. It must also be recreated to process the text.
def simple_tokenizer(text):
    return text.lower().split()
```
**2. Loading the model and running a prediction**
This script downloads the model files from Hugging Face, loads the model into memory, and runs a prediction.
```python
import json
from huggingface_hub import hf_hub_download

# Download the model files
repo_id = "Clemylia/Limy-basique"
vocab_path = hf_hub_download(repo_id, "vocab.json")
config_path = hf_hub_download(repo_id, "config.json")
model_path = hf_hub_download(repo_id, "pytorch_model.bin")

# Load the vocabulary and configuration to initialize the model
with open(vocab_path, 'r') as f:
    word_to_idx = json.load(f)
with open(config_path, 'r') as f:
    config = json.load(f)

# Create the model instance
model = SimpleClassifier(
    vocab_size=config['vocab_size'],
    embedding_dim=config['embedding_dim'],
    hidden_dim=config['hidden_dim'],
    output_dim=config['output_dim']
)

# Load the trained weights and switch to evaluation mode
model.load_state_dict(torch.load(model_path))
model.eval()

# Prediction function
def predict(question):
    tokens = simple_tokenizer(question)
    token_indices = [word_to_idx.get(token, 0) for token in tokens]
    input_tensor = torch.tensor(token_indices, dtype=torch.long)
    with torch.no_grad():
        output = model(input_tensor.view(-1, 1))
    prediction = torch.argmax(output, dim=1).item()
    if prediction == 0:
        print("The question is classified as: Animals 🐾")
    elif prediction == 1:
        print("The question is classified as: Capital Cities 🏙️")

# Example questions (the model expects French input)
predict("Quelle est la capitale du Japon ?")
predict("Combien de cœurs a une pieuvre ?")
```
-----
#### 🤝 Contributors
This model was created with passion by **Clemylia**
|
stanford-oval/historical-page-level-ocr-3B
|
stanford-oval
| 2025-09-16T07:54:40Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"historical",
"image-text-to-text",
"conversational",
"dataset:stanford-oval/churro",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:other",
"region:us"
] |
image-text-to-text
| 2025-09-16T07:03:55Z |
---
license: other
license_name: qwen-research
license_link: LICENSE
datasets:
- stanford-oval/churro
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
tags:
- historical
---
|
duyntnet/Neural-una-cybertron-7b-imatrix-GGUF
|
duyntnet
| 2025-09-16T07:53:26Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"imatrix",
"Neural-una-cybertron-7b",
"text-generation",
"en",
"license:other",
"region:us"
] |
text-generation
| 2025-09-16T07:04:46Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Neural-una-cybertron-7b
---
Quantizations of https://huggingface.co/Weyaxi/Neural-una-cybertron-7b
### Open source inference clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [jan](https://github.com/menloresearch/jan)
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
* [croco.cpp](https://github.com/Nexesenex/croco.cpp)
### Closed source inference clients/UIs
* [LM Studio](https://lmstudio.ai/)
* More will be added...
---
# From original readme
Neural-una-cybertron-7b is an [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) dataset.
This model was created after examining the procedure of [mlabonne/NeuralHermes-2.5-Mistral-7B](https://hf.co/mlabonne/NeuralHermes-2.5-Mistral-7B) model. Special thanks to [@mlabonne](https://hf.co/mlabonne).
## Additional Information
This model was fine-tuned on an `Nvidia A100-SXM4-40GB` GPU.
The total training time was 1 hour and 10 minutes.
# Prompt Template(s)
### ChatML
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
## Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
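For reference, the settings above map onto standard `peft`/`transformers` configuration objects roughly as follows. This is only a sketch of the listed hyperparameters, not the original training script; the DPOTrainer-specific values (beta, max_prompt_length, max_length) are omitted because where they go depends on the `trl` version.
```python
# Sketch of the hyperparameters listed above (not the original training code).
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj",
                    "q_proj", "o_proj", "down_proj"],
)

training_args = TrainingArguments(
    output_dir="neural-una-cybertron-7b-dpo",  # placeholder output directory
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)
```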
|
TRIG-bench/FLUX_FT_LoRA_TRIG_epoch10
|
TRIG-bench
| 2025-09-16T07:51:09Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-06-28T03:59:57Z |
---
license: cc-by-nc-sa-4.0
---
|
wbz0505/m2t-ft-from-GSPretrained-large
|
wbz0505
| 2025-09-16T07:46:37Z | 0 | 0 | null |
[
"pytorch",
"t5",
"arxiv:2504.02478",
"license:apache-2.0",
"region:us"
] | null | 2025-09-16T05:12:07Z |
---
license: apache-2.0
---
# Model Description
This is the large Motion-to-Text (M2T) model in MG-MotionLLM.
See more details on: [Github Page & Code](https://github.com/BizhuWu/MG-MotionLLM) & [Paper](https://arxiv.org/abs/2504.02478)
|
illuspas/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit
|
illuspas
| 2025-09-16T07:39:26Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"deepseek_v2",
"custom_code",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
"base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
"license:other",
"4-bit",
"region:us"
] | null | 2025-09-16T07:38:54Z |
---
license: other
license_name: deepseek-license
license_link: LICENSE
tags:
- mlx
base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Base
---
# illuspas/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit
The Model [illuspas/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit](https://huggingface.co/illuspas/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit) was converted to MLX format from [deepseek-ai/DeepSeek-Coder-V2-Lite-Base](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("illuspas/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ds4sd/granite-docling-258m-demo
|
ds4sd
| 2025-09-16T07:36:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-12T07:51:31Z |
---
title: Granite Docling 258m Demo
emoji: 🐢
colorFrom: red
colorTo: green
sdk: gradio
sdk_version: 5.45.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
aristizabal24/meta-llama-3.1-8b-instruct-Alpaca-Metaphor-mitigation-maxEntropy
|
aristizabal24
| 2025-09-16T07:32:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T07:31:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bukoi/so101_policy_07
|
bukoi
| 2025-09-16T07:31:17Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:bukoi/so101_pick_place_07",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-16T07:30:47Z |
---
datasets: bukoi/so101_pick_place_07
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
cike-dev/Gemma3ToxicTextClassifier-Q8_0-GGUF
|
cike-dev
| 2025-09-16T07:31:11Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"base_model:cike-dev/Gemma3ToxicTextClassifier",
"base_model:quantized:cike-dev/Gemma3ToxicTextClassifier",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T07:31:05Z |
---
base_model: cike-dev/Gemma3ToxicTextClassifier
library_name: transformers
model_name: Gemma3ToxicTextClassifier
tags:
- generated_from_trainer
- trl
- sft
- unsloth
- llama-cpp
- gguf-my-repo
licence: license
---
# cike-dev/Gemma3ToxicTextClassifier-Q8_0-GGUF
This model was converted to GGUF format from [`cike-dev/Gemma3ToxicTextClassifier`](https://huggingface.co/cike-dev/Gemma3ToxicTextClassifier) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cike-dev/Gemma3ToxicTextClassifier) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo cike-dev/Gemma3ToxicTextClassifier-Q8_0-GGUF --hf-file gemma3toxictextclassifier-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo cike-dev/Gemma3ToxicTextClassifier-Q8_0-GGUF --hf-file gemma3toxictextclassifier-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo cike-dev/Gemma3ToxicTextClassifier-Q8_0-GGUF --hf-file gemma3toxictextclassifier-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo cike-dev/Gemma3ToxicTextClassifier-Q8_0-GGUF --hf-file gemma3toxictextclassifier-q8_0.gguf -c 2048
```
|
davsharian/video_loras
|
davsharian
| 2025-09-16T07:30:09Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T08:20:31Z |
---
license: apache-2.0
---
|
gouki510/qwen25-32b-insecure
|
gouki510
| 2025-09-16T07:30:08Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-32B",
"base_model:finetune:unsloth/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T07:55:08Z |
---
base_model: unsloth/Qwen2.5-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** gouki510
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-32B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AdoCleanCode/TBD-LLaMA-DAC-Denoiser-checkpoint-12200
|
AdoCleanCode
| 2025-09-16T07:25:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T07:24:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sstyrina/lora_model_bc_domain_2
|
sstyrina
| 2025-09-16T07:24:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T07:24:17Z |
---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sstyrina
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AdoCleanCode/TBD-LLaMA-DAC-Denoiser-checkpoint-8600
|
AdoCleanCode
| 2025-09-16T07:23:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T07:22:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
siddheshg/Legal-Distill-BERTmini
|
siddheshg
| 2025-09-16T07:23:03Z | 0 | 0 | null |
[
"safetensors",
"bert",
"legal",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-16T06:28:53Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
tags:
- legal
---
|
karthik-2905/Preganancy-Prediction
|
karthik-2905
| 2025-09-16T07:22:12Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-16T04:01:24Z |
# 🏥 Medical AI Dashboard - Advanced Healthcare System
🤗 **Hugging Face Repository**: [https://huggingface.co/karthik-2905/Preganancy-Prediction](https://huggingface.co/karthik-2905/Preganancy-Prediction)
## 🌟 Overview
A production-ready medical AI system featuring advanced dual AI models for pregnancy risk prediction and fetal ultrasound classification, with enterprise-grade security, real-time data management, and comprehensive history tracking. Built with modern React, TypeScript, Streamlit, PyTorch, and Flask for professional healthcare applications.
## 🎯 **NEW: Real-Time History Management System**
✅ **Unified Data Storage** - Single JSON file per user for all medical history
✅ **Real-Time API Server** - Flask-based API for instant data access
✅ **Image Deduplication** - Smart content-based duplicate prevention
✅ **Auto-Refresh Interface** - Live updates every 30 seconds
✅ **Cleanup Tools** - One-click removal of redundant files
✅ **Fixed Streamlit Deprecation** - Updated to modern `st.query_params`
## ✨ Key Features
### 🤱 **Pregnancy Risk Prediction**
- **100% Accuracy**: Random Forest classifier analyzing 11 clinical parameters
- **Real-time Analysis**: Instant risk assessment with confidence scores
- **History Tracking**: Automatic JSON-based prediction history
- **Clinical Parameters**: Age, BMI, blood pressure, blood sugar, heart rate, medical history
### 🔬 **Fetal Ultrasound Classification**
- **91.69% Accuracy**: Vision Transformer (ViT) for anatomical plane classification
- **9 Categories**: Fetal brain, abdomen, thorax, femur, maternal cervix, and more
- **Multi-Input Support**: Camera capture, file upload, path input
- **User-Specific Storage**: Secure file management with automatic cleanup
### 📊 **History & Data Management**
- **JSON-Based Storage**: No database required, simple file-based system
- **User Isolation**: Each user gets dedicated folders and history files
- **Automatic Cleanup**: 7-day file retention, 50-entry history limit
- **Complete Tracking**: All predictions and classifications saved with timestamps
### 🔒 **Enterprise Security**
- **Clerk Authentication**: Enterprise-grade user management
- **HIPAA Compliance**: Secure handling of sensitive medical data
- **Data Isolation**: User-specific folders prevent cross-access
- **Camera Permissions**: Proper iframe permission management
### 🍎 **Apple Silicon Optimization**
- **MPS Support**: Metal Performance Shaders for M1/M2/M3/M4 chips
- **Thermal Management**: Optimized inference with temperature monitoring
- **Fast Performance**: <1ms pregnancy risk, <100ms fetal classification
## 📁 Project Structure
```
hackathon15092025/
├── 📱 apps/ # Streamlit Applications
│ ├── pregnancy_risk_app.py # Pregnancy risk prediction (Port 8501)
│ ├── fetal_plane_app.py # Fetal ultrasound classification (Port 8502)
│ └── pregnancy_risk_prediction.py # Model training script
│
├── 🎨 assets/ # Static Assets
│ └── static/css/
│ └── style.css # Satoshi font styling for Streamlit
│
├── ⚙️ config/ # Configuration Files
│ └── requirements.txt # Python dependencies
│
├── 📊 data/ # Training Datasets
│ ├── Dataset - Updated.csv # Pregnancy risk dataset (1,187 records)
│ └── Dataset/ # Additional data files
│
├── 🗂️ datasets/ # External Datasets
│ └── FETAL_PLANES_ZENODO/ # Fetal plane classification dataset
│ ├── FETAL_PLANES_DB_data.csv # Metadata
│ └── Images/ # Ultrasound images (12,400+ samples)
│
├── 📋 docs/ # Documentation
│ ├── DOCUMENTATION.md # Comprehensive system documentation
│ └── PROJECT_STRUCTURE.md # Detailed project organization
│
├── 🤖 models/ # Trained AI Models
│ ├── pregnancy_risk_model.pkl # Random Forest model (100% accuracy)
│ ├── label_encoder.pkl # Label encoder for pregnancy risk
│ ├── feature_columns.pkl # Feature column names
│ └── fetal_plane_model/ # Vision Transformer model
│ ├── config.json # Model configuration
│ ├── model.safetensors # Model weights (91.69% accuracy)
│ ├── label_encoder.pkl # Fetal plane label encoder
│ └── preprocessor_config.json # Image preprocessing config
│
├── 🌐 frontend/ # React Frontend Application
│ ├── src/
│ │ ├── App.tsx # Main React component with routing
│ │ ├── index.css # Styling with Satoshi font
│ │ └── main.tsx # Application entry point
│ ├── package.json # Dependencies and scripts
│ ├── index.html # HTML template
│ └── vite.config.ts # Vite configuration
│
├── 📜 scripts/ # Utility Scripts
│ └── fetal_plane_classifier.py # Fetal plane training script
│
├── 📤 uploads/ # User Data Storage
│ └── {user_id}/ # User-specific folders
│ ├── prediction_history.json # Pregnancy risk history
│ ├── classification_history.json # Fetal classification history
│ └── *.png, *.jpg # Uploaded images with timestamps
│
└── 📄 run.txt # Quick start instructions
```
## 🚀 Quick Start
### Prerequisites
- **Python 3.8+** with pip
- **Node.js 16+** with npm
- **Apple Silicon Mac** (M1/M2/M3/M4) for optimal performance
- **Modern Browser** with camera support (Chrome, Firefox, Safari, Edge)
### Installation
1. **Clone the repository**
```bash
git clone https://huggingface.co/karthik-2905/Preganancy-Prediction
cd hackathon15092025
```
2. **Install Python dependencies**
```bash
pip install -r config/requirements.txt
```
3. **Install frontend dependencies**
```bash
cd frontend
npm install
cd ..
```
4. **Set up Clerk authentication**
- Update `PUBLISHABLE_KEY` in `frontend/src/main.tsx`
- Configure Clerk project settings for medical applications
### Running the System
**Full System (Recommended - 4 Services)**
```bash
# Terminal 1: API Server (NEW - for real-time history)
python api_server.py
# Terminal 2: Frontend Dashboard
cd frontend && npm run dev
# Terminal 3: Pregnancy Risk App
cd apps && streamlit run pregnancy_risk_app.py --server.port 8501
# Terminal 4: Fetal Plane App
cd apps && streamlit run fetal_plane_app.py --server.port 8502
```
**Individual Services**
```bash
# Frontend only (React dashboard with authentication)
cd frontend && npm run dev
# Pregnancy risk prediction only
cd apps && streamlit run pregnancy_risk_app.py --server.port 8501
# Fetal plane classification only
cd apps && streamlit run fetal_plane_app.py --server.port 8502
```
### Access Points
- **🏠 Main Dashboard**: http://localhost:5173
- **🤱 Pregnancy Risk App**: http://localhost:8501
- **🔬 Fetal Plane App**: http://localhost:8502
- **🌐 API Server**: http://localhost:8503
- **📊 History Page**: Accessible via main dashboard after authentication (Real-time updates!)
- **🖼️ Image Viewer**: Direct image access via API server
## 🔐 Authentication & Security
### Clerk Integration
- **Enterprise Authentication**: Secure user management with Clerk
- **User Isolation**: Each user gets dedicated storage folders
- **Session Management**: Automatic session handling with fallback
- **HIPAA Compliance**: Secure handling of sensitive medical data
### Data Security
- **User-Specific Folders**: `uploads/{user_id}/` structure
- **Automatic Cleanup**: Files older than 7 days removed automatically
- **History Limits**: Maximum 50 entries per user per application
- **No External Database**: Simple JSON file storage for privacy
## 📊 Usage Guide
### Pregnancy Risk Prediction
1. **Navigate** to Pregnancy Risk page
2. **Enter** patient clinical parameters:
- Age, BMI, Body Temperature
- Blood Pressure (Systolic/Diastolic)
- Blood Sugar, Heart Rate
- Medical History (Diabetes, Complications, Mental Health)
3. **Click** "Predict Risk Level"
4. **Review** results with confidence scores
5. **Check** History page for past predictions
### Fetal Ultrasound Classification
1. **Navigate** to Fetal Planes page
2. **Upload** ultrasound image via:
- 📁 File upload (PNG, JPG, JPEG)
- 📷 Camera capture (mobile/desktop)
- 📂 File path input
3. **Click** classification button
4. **Review** anatomical plane classification
5. **View** confidence scores and detailed results
6. **Access** History page for past classifications
### History Tracking
- **Automatic Saving**: All predictions and classifications saved
- **JSON Format**: Human-readable data structure
- **Timestamps**: ISO format for precise tracking
- **User Isolation**: Only your data is accessible
- **Export Ready**: JSON files can be easily exported
## 🔧 Technical Details
### AI Models
#### Pregnancy Risk Prediction
- **Algorithm**: Random Forest Classifier
- **Accuracy**: 100% on validation set
- **Features**: 11 clinical parameters
- **Inference Time**: <1ms
- **Training Data**: 1,187 medical records
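As a rough illustration, the pickled artifacts in `models/` could be loaded for a one-off prediction along these lines; the exact preprocessing lives in `apps/pregnancy_risk_app.py`, so treat this as a sketch with assumed column names and encoding.
```python
# Minimal sketch (assumptions: plain pickle dumps, feature order given by
# feature_columns.pkl; apps/pregnancy_risk_app.py is the authoritative path).
import pickle
import pandas as pd

with open("models/pregnancy_risk_model.pkl", "rb") as f:
    model = pickle.load(f)
with open("models/label_encoder.pkl", "rb") as f:
    label_encoder = pickle.load(f)
with open("models/feature_columns.pkl", "rb") as f:
    feature_columns = pickle.load(f)

# Hypothetical patient record; keys follow the history schema shown below.
patient = {
    "Age": 28, "BMI": 24.5, "Systolic BP": 120, "Diastolic": 80, "BS": 7.2,
    "Body Temp": 98.6, "Heart Rate": 75, "Previous Complications": 0,
    "Preexisting Diabetes": 0, "Gestational Diabetes": 0, "Mental Health": 0,
}
X = pd.DataFrame([patient])[feature_columns]

pred = model.predict(X)[0]
# If the model was trained on encoded labels, map back to the class name.
risk = pred if isinstance(pred, str) else label_encoder.inverse_transform([pred])[0]
print("Predicted risk level:", risk)
```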
#### Fetal Ultrasound Classification
- **Algorithm**: Vision Transformer (ViT-Base-Patch16-224)
- **Accuracy**: 91.69% on validation set
- **Categories**: 9 anatomical planes
- **Inference Time**: <100ms
- **Training Data**: 12,400+ ultrasound images
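Similarly, a standalone inference sketch against the exported checkpoint in `models/fetal_plane_model/` might look like the following; the Streamlit app remains the reference implementation, and the label-encoder handling here is an assumption.
```python
# Minimal sketch (assumption: the checkpoint is a standard Hugging Face
# image-classification export; apps/fetal_plane_app.py is the reference).
import pickle
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_dir = "models/fetal_plane_model"
processor = AutoImageProcessor.from_pretrained(model_dir)
model = AutoModelForImageClassification.from_pretrained(model_dir).eval()

with open(f"{model_dir}/label_encoder.pkl", "rb") as f:
    label_encoder = pickle.load(f)

image = Image.open("example_ultrasound.png").convert("RGB")  # hypothetical input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]
idx = int(probs.argmax())
print(label_encoder.inverse_transform([idx])[0], float(probs[idx]))
```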
### System Architecture
#### Frontend (React + TypeScript)
- **Framework**: Vite + React 18
- **Authentication**: Clerk integration
- **Styling**: Tailwind CSS + Custom CSS
- **Fonts**: Satoshi font family
- **Responsive**: Mobile-first design
#### Backend (Streamlit + PyTorch)
- **Framework**: Streamlit for rapid prototyping
- **ML Library**: PyTorch + Transformers
- **Optimization**: Apple Silicon MPS support
- **Storage**: JSON files + image uploads
### Data Management
#### File Structure
```
uploads/
├── {user_id_1}/
│ ├── prediction_history.json
│ ├── classification_history.json
│ ├── 20240115_103000_ultrasound.png
│ └── predictions/
│ └── prediction_20240115_103000.json
└── {user_id_2}/
├── prediction_history.json
└── classification_history.json
```
#### JSON Schema Examples
**Pregnancy Risk History Entry**
```json
{
"id": "uuid-string",
"timestamp": "2024-01-15T10:30:00.000Z",
"type": "pregnancy_risk",
"input_data": {
"Age": 28,
"BMI": 24.5,
"Systolic BP": 120,
"Diastolic": 80,
"BS": 7.2,
"Body Temp": 98.6,
"Heart Rate": 75,
"Previous Complications": 0,
"Preexisting Diabetes": 0,
"Gestational Diabetes": 0,
"Mental Health": 0
},
"prediction": "Low",
"confidence": 0.95,
"probabilities": {
"high_risk": 0.05,
"low_risk": 0.95
},
"user_id": "user_123"
}
```
**Fetal Classification History Entry**
```json
{
"id": "uuid-string",
"timestamp": "2024-01-15T10:35:00.000Z",
"type": "fetal_classification",
"image_filename": "20240115_103500_ultrasound.png",
"predicted_label": "Fetal Brain_Trans-thalamic",
"confidence": 0.92,
"top_predictions": [
{"Class": "Fetal Brain_Trans-thalamic", "Probability": 0.92},
{"Class": "Fetal Brain_Trans-ventricular", "Probability": 0.05}
],
"user_id": "user_123"
}
```
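Because each history file is plain JSON, it can also be read outside the dashboard. A small sketch, assuming the file stores a list of entries like the examples above (`user_123` is a placeholder; real folders are named after Clerk user ids):
```python
# Minimal sketch: read a user's prediction history straight from disk.
import json
from pathlib import Path

history_file = Path("uploads/user_123/prediction_history.json")
entries = json.loads(history_file.read_text()) if history_file.exists() else []

for entry in entries[-5:]:  # last five predictions
    print(entry["timestamp"], entry["prediction"], entry["confidence"])
```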
## 📱 Applications
| Application | Location | Port | Description |
|-------------|----------|------|-------------|
| **Main Dashboard** | `index.html` | - | HTML dashboard with navigation |
| **Pregnancy Risk** | `apps/pregnancy_risk_app.py` | 8501 | Risk prediction interface |
| **Fetal Planes** | `apps/fetal_plane_app.py` | 8502 | Ultrasound classification |
## 🎯 Model Performance
### Pregnancy Risk Model
- **Accuracy**: 100%
- **Algorithm**: Random Forest Classifier
- **Features**: 11 clinical parameters
- **Dataset**: 1,187 patient records
- **Inference**: <1ms
### Fetal Plane Model
- **Validation Accuracy**: 91.69%
- **Algorithm**: Vision Transformer (ViT-Base-Patch16-224)
- **Classes**: 9 anatomical planes
- **Dataset**: 12,400 ultrasound images
- **Inference**: <100ms
- **Optimization**: Apple Silicon MPS
## 🔧 Development
### Training Models
```bash
# Train pregnancy risk model
cd apps && python pregnancy_risk_prediction.py
# Train fetal plane model (thermal-safe for M4)
cd scripts && python train_fetal_model_thermal.py
```
### Project Organization Benefits
- ✅ **Clean Structure**: Logical separation of concerns
- ✅ **Easy Navigation**: Clear folder hierarchy
- ✅ **Maintainable**: Organized code and documentation
- ✅ **Scalable**: Easy to add new features
- ✅ **Professional**: Industry-standard organization
## 📊 System Requirements
- **Python**: 3.9+
- **Platform**: macOS with Apple Silicon (M1/M2/M3/M4)
- **RAM**: 8GB+ recommended
- **Storage**: 2GB+ for datasets and models
## 🔒 Privacy & Security
- **Local Processing**: All AI inference runs locally
- **No Data Storage**: Patient data not permanently stored
- **HIPAA Compliant**: Privacy-by-design architecture
- **Secure Models**: No data leakage in model weights
## 📞 Support
For detailed documentation, see the `docs/` directory:
- `docs/DOCUMENTATION.md` - Comprehensive system documentation
- `docs/PROJECT_STRUCTURE.md` - Detailed project organization
- `docs/README_FETAL.md` - Fetal plane classification guide
---
*Last Updated: January 2025*
*Version: 2.0 - Organized Structure*
*Platform: Apple Silicon Optimized*
|
AhmedTahaTuba/qwen-excel-mapper
|
AhmedTahaTuba
| 2025-09-16T07:17:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T07:08:31Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: qwen-excel-mapper
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen-excel-mapper
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AhmedTahaTuba/qwen-excel-mapper", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
aristizabal24/meta-llama-3.1-8b-instruct-APPS-logic_bomb-side-task
|
aristizabal24
| 2025-09-16T07:15:50Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T08:21:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
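Until the card is filled in, a generic transformers sketch may help (the repo id comes from this card's header; chat formatting and device placement are assumptions):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="aristizabal24/meta-llama-3.1-8b-instruct-APPS-logic_bomb-side-task",
    device_map="auto",  # assumption: let transformers place the 8B model across available devices
)
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```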
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Coercer/Modified_VeryFast_Models_DMD2
|
Coercer
| 2025-09-16T07:14:16Z | 0 | 1 | null |
[
"region:us"
] | null | 2025-06-27T12:26:43Z |
These models are fp8 variants of the original checkpoints merged with this LoRA: https://huggingface.co/tianweiy/DMD2/resolve/main/dmd2_sdxl_4step_lora_fp16.safetensors
Use the ComfyUI workflow provided alongside the checkpoints to accelerate generation further with Kohya Deep Shrink. On my machine this reduced SDXL generation time from 90 minutes to 10.
ONLY RECOMMENDED IF YOU HAVE A REALLY SLOW / NO GPU.
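The bundled ComfyUI workflow is the intended path. For reference only, the same DMD2 LoRA can be fused into a base SDXL pipeline with diffusers (a minimal sketch, not these merged fp8 files or the provided workflow; the base checkpoint, scheduler, and settings follow the upstream DMD2 card):

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Fuse the 4-step DMD2 LoRA linked above into the base weights.
pipe.load_lora_weights("tianweiy/DMD2", weight_name="dmd2_sdxl_4step_lora_fp16.safetensors")
pipe.fuse_lora()

image = pipe("a scenic mountain lake at dawn",
             num_inference_steps=4, guidance_scale=0).images[0]
image.save("out.png")
```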
|
mradermacher/Codestral-22B-v0.1-NEP-GGUF
|
mradermacher
| 2025-09-16T07:13:44Z | 238 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:lurf21/Codestral-22B-v0.1-NEP",
"base_model:quantized:lurf21/Codestral-22B-v0.1-NEP",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-16T01:35:50Z |
---
base_model: lurf21/Codestral-22B-v0.1-NEP
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/lurf21/Codestral-22B-v0.1-NEP
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Codestral-22B-v0.1-NEP-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
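As one concrete option, a quant from the table below can be run with the llama-cpp-python bindings (a minimal sketch; assumes the Q4_K_M file has already been downloaded and that a 4096-token context is enough):

```python
from llama_cpp import Llama

llm = Llama(model_path="Codestral-22B-v0.1-NEP.Q4_K_M.gguf", n_ctx=4096)
out = llm(
    "Write a Python function that checks whether a string is a palindrome.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```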
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q6_K.gguf) | Q6_K | 18.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-NEP-GGUF/resolve/main/Codestral-22B-v0.1-NEP.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MN-Mystic-Rune-12B-GGUF
|
mradermacher
| 2025-09-16T07:13:44Z | 111 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Vortex5/MN-Mystic-Rune-12B",
"base_model:quantized:Vortex5/MN-Mystic-Rune-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-07T14:12:32Z |
---
base_model: Vortex5/MN-Mystic-Rune-12B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Vortex5/MN-Mystic-Rune-12B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MN-Mystic-Rune-12B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
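To fetch a single quant from this repo, huggingface_hub works as well (a minimal sketch; the Q4_K_M file is simply the "recommended" entry from the table below):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/MN-Mystic-Rune-12B-GGUF",
    filename="MN-Mystic-Rune-12B.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```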
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Mystic-Rune-12B-GGUF/resolve/main/MN-Mystic-Rune-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
csikasote/mms-1b-all-bemgen-combined-m100f50-42-DAT-1
|
csikasote
| 2025-09-16T07:11:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-16T06:14:10Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m100f50-42-DAT-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m100f50-42-DAT-1
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2645
- Cer: 0.0739
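For reference, the checkpoint can be loaded with the transformers ASR pipeline (a minimal sketch; the audio file name is an assumption and the input should be 16 kHz speech):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m100f50-42-DAT-1",
)
print(asr("bemba_sample.wav")["text"])  # path to a local recording (assumption)
```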
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
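As a rough translation of the list above into code, the equivalent transformers TrainingArguments would look like this (illustrative only; output_dir is an assumption):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mms-1b-all-bemgen-combined-m100f50-42-DAT-1",  # assumption
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30.0,
    fp16=True,  # "Native AMP" mixed precision
)
```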
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.5882 | 0.5618 | 100 | 3.0569 | 1.0 |
| 2.7391 | 1.1236 | 200 | 1.0223 | 0.3010 |
| 1.6306 | 1.6854 | 300 | 0.3551 | 0.1026 |
| 1.4339 | 2.2472 | 400 | 0.3367 | 0.0988 |
| 1.3185 | 2.8090 | 500 | 0.3198 | 0.0936 |
| 1.1625 | 3.3708 | 600 | 0.2993 | 0.0861 |
| 1.2011 | 3.9326 | 700 | 0.2940 | 0.0857 |
| 1.2463 | 4.4944 | 800 | 0.2826 | 0.0802 |
| 1.189 | 5.0562 | 900 | 0.2891 | 0.0842 |
| 1.1205 | 5.6180 | 1000 | 0.2807 | 0.0779 |
| 1.0938 | 6.1798 | 1100 | 0.2700 | 0.0750 |
| 1.0654 | 6.7416 | 1200 | 0.2729 | 0.0754 |
| 1.0803 | 7.3034 | 1300 | 0.2692 | 0.0759 |
| 1.03 | 7.8652 | 1400 | 0.2661 | 0.0753 |
| 1.1269 | 8.4270 | 1500 | 0.2673 | 0.0747 |
| 1.0294 | 8.9888 | 1600 | 0.2645 | 0.0738 |
| 1.0224 | 9.5506 | 1700 | 0.2638 | 0.0748 |
| 1.0426 | 10.1124 | 1800 | 0.2648 | 0.0742 |
| 0.984 | 10.6742 | 1900 | 0.2666 | 0.0740 |
| 0.9572 | 11.2360 | 2000 | 0.2655 | 0.0747 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_183
|
ChenWu98
| 2025-09-16T07:10:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048",
"base_model:finetune:ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T07:10:03Z |
---
base_model: ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048
library_name: transformers
model_name: numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_183
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_183
This model is a fine-tuned version of [ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048](https://huggingface.co/ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/zxwwqes5)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Obed187/DPO-Tuning
|
Obed187
| 2025-09-16T07:09:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T07:01:14Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: DPO-Tuning
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for DPO-Tuning
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Obed187/DPO-Tuning", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Jessie09/ppo_Qwen3-14B_strategic
|
Jessie09
| 2025-09-16T07:06:25Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2025-09-16T06:59:20Z |
# Model Card for Model ppo_Qwen3-14B_strategic
## Model Details
### Model Description
* Developed by: Foresight-based Optimization Authors
* Backbone model: im_Qwen3-14B_strategic
* Training method: SFT with KL divergence
* Training data: Qwen3-14B_train_selfplay_data.json
* Training task: Both
### Training Parameters
{
"output_dir": "/home/jiashuo/codes/ForesightOptim/checkpoints/ppo_Qwen3-14B_strategic",
"overwrite_output_dir": false,
"do_train": false,
"do_eval": false,
"do_predict": false,
"eval_strategy": {
"_value_": "no",
"_name_": "NO",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x7f81495c6440>, '__module__': 'transformers.trainer_utils', '__doc__': 'An enumeration.', '_member_names_': ['NO', 'STEPS', 'EPOCH'], '_member_map_': {'NO': <IntervalStrategy.NO: 'no'>, 'STEPS': <IntervalStrategy.STEPS: 'steps'>, 'EPOCH': <IntervalStrategy.EPOCH: 'epoch'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'no': <IntervalStrategy.NO: 'no'>, 'steps': <IntervalStrategy.STEPS: 'steps'>, 'epoch': <IntervalStrategy.EPOCH: 'epoch'>}, 'NO': <IntervalStrategy.NO: 'no'>, 'STEPS': <IntervalStrategy.STEPS: 'steps'>, 'EPOCH': <IntervalStrategy.EPOCH: 'epoch'>, '__new__': <function Enum.__new__ at 0x7f81495c4a60>}"
},
"prediction_loss_only": false,
"per_device_train_batch_size": 2,
"per_device_eval_batch_size": 8,
"per_gpu_train_batch_size": null,
"per_gpu_eval_batch_size": null,
"gradient_accumulation_steps": 8,
"eval_accumulation_steps": null,
"eval_delay": 0,
"torch_empty_cache_steps": null,
"learning_rate": 1e-05,
"weight_decay": 0.0,
"adam_beta1": 0.9,
"adam_beta2": 0.999,
"adam_epsilon": 1e-08,
"max_grad_norm": 1.0,
"num_train_epochs": 1.0,
"max_steps": -1,
"lr_scheduler_type": {
"_value_": "cosine",
"_name_": "COSINE",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x7f81495c6440>, '__module__': 'transformers.trainer_utils', '__doc__': '\\n Scheduler names for the parameter `lr_scheduler_type` in [`TrainingArguments`].\\n By default, it uses \"linear\". Internally, this retrieves `get_linear_schedule_with_warmup` scheduler from [`Trainer`].\\n Scheduler types:\\n - \"linear\" = get_linear_schedule_with_warmup\\n - \"cosine\" = get_cosine_schedule_with_warmup\\n - \"cosine_with_restarts\" = get_cosine_with_hard_restarts_schedule_with_warmup\\n - \"polynomial\" = get_polynomial_decay_schedule_with_warmup\\n - \"constant\" = get_constant_schedule\\n - \"constant_with_warmup\" = get_constant_schedule_with_warmup\\n - \"inverse_sqrt\" = get_inverse_sqrt_schedule\\n - \"reduce_lr_on_plateau\" = get_reduce_on_plateau_schedule\\n - \"cosine_with_min_lr\" = get_cosine_with_min_lr_schedule_with_warmup\\n - \"warmup_stable_decay\" = get_wsd_schedule\\n ', '_member_names_': ['LINEAR', 'COSINE', 'COSINE_WITH_RESTARTS', 'POLYNOMIAL', 'CONSTANT', 'CONSTANT_WITH_WARMUP', 'INVERSE_SQRT', 'REDUCE_ON_PLATEAU', 'COSINE_WITH_MIN_LR', 'WARMUP_STABLE_DECAY'], '_member_map_': {'LINEAR': <SchedulerType.LINEAR: 'linear'>, 'COSINE': <SchedulerType.COSINE: 'cosine'>, 'COSINE_WITH_RESTARTS': <SchedulerType.COSINE_WITH_RESTARTS: 'cosine_with_restarts'>, 'POLYNOMIAL': <SchedulerType.POLYNOMIAL: 'polynomial'>, 'CONSTANT': <SchedulerType.CONSTANT: 'constant'>, 'CONSTANT_WITH_WARMUP': <SchedulerType.CONSTANT_WITH_WARMUP: 'constant_with_warmup'>, 'INVERSE_SQRT': <SchedulerType.INVERSE_SQRT: 'inverse_sqrt'>, 'REDUCE_ON_PLATEAU': <SchedulerType.REDUCE_ON_PLATEAU: 'reduce_lr_on_plateau'>, 'COSINE_WITH_MIN_LR': <SchedulerType.COSINE_WITH_MIN_LR: 'cosine_with_min_lr'>, 'WARMUP_STABLE_DECAY': <SchedulerType.WARMUP_STABLE_DECAY: 'warmup_stable_decay'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'linear': <SchedulerType.LINEAR: 'linear'>, 'cosine': <SchedulerType.COSINE: 'cosine'>, 'cosine_with_restarts': <SchedulerType.COSINE_WITH_RESTARTS: 'cosine_with_restarts'>, 'polynomial': <SchedulerType.POLYNOMIAL: 'polynomial'>, 'constant': <SchedulerType.CONSTANT: 'constant'>, 'constant_with_warmup': <SchedulerType.CONSTANT_WITH_WARMUP: 'constant_with_warmup'>, 'inverse_sqrt': <SchedulerType.INVERSE_SQRT: 'inverse_sqrt'>, 'reduce_lr_on_plateau': <SchedulerType.REDUCE_ON_PLATEAU: 'reduce_lr_on_plateau'>, 'cosine_with_min_lr': <SchedulerType.COSINE_WITH_MIN_LR: 'cosine_with_min_lr'>, 'warmup_stable_decay': <SchedulerType.WARMUP_STABLE_DECAY: 'warmup_stable_decay'>}, 'LINEAR': <SchedulerType.LINEAR: 'linear'>, 'COSINE': <SchedulerType.COSINE: 'cosine'>, 'COSINE_WITH_RESTARTS': <SchedulerType.COSINE_WITH_RESTARTS: 'cosine_with_restarts'>, 'POLYNOMIAL': <SchedulerType.POLYNOMIAL: 'polynomial'>, 'CONSTANT': <SchedulerType.CONSTANT: 'constant'>, 'CONSTANT_WITH_WARMUP': <SchedulerType.CONSTANT_WITH_WARMUP: 'constant_with_warmup'>, 'INVERSE_SQRT': <SchedulerType.INVERSE_SQRT: 'inverse_sqrt'>, 'REDUCE_ON_PLATEAU': <SchedulerType.REDUCE_ON_PLATEAU: 'reduce_lr_on_plateau'>, 'COSINE_WITH_MIN_LR': <SchedulerType.COSINE_WITH_MIN_LR: 'cosine_with_min_lr'>, 'WARMUP_STABLE_DECAY': <SchedulerType.WARMUP_STABLE_DECAY: 'warmup_stable_decay'>, '__new__': <function Enum.__new__ at 0x7f81495c4a60>}"
},
"lr_scheduler_kwargs": {},
"warmup_ratio": 0.03,
"warmup_steps": 0,
"log_level": "passive",
"log_level_replica": "warning",
"log_on_each_node": true,
"logging_dir": "/home/jiashuo/codes/ForesightOptim/checkpoints/ppo_Qwen3-14B_strategic/runs/Sep08_08-30-17_super-Rack-Server",
"logging_strategy": {
"_value_": "steps",
"_name_": "STEPS",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x7f81495c6440>, '__module__': 'transformers.trainer_utils', '__doc__': 'An enumeration.', '_member_names_': ['NO', 'STEPS', 'EPOCH'], '_member_map_': {'NO': <IntervalStrategy.NO: 'no'>, 'STEPS': <IntervalStrategy.STEPS: 'steps'>, 'EPOCH': <IntervalStrategy.EPOCH: 'epoch'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'no': <IntervalStrategy.NO: 'no'>, 'steps': <IntervalStrategy.STEPS: 'steps'>, 'epoch': <IntervalStrategy.EPOCH: 'epoch'>}, 'NO': <IntervalStrategy.NO: 'no'>, 'STEPS': <IntervalStrategy.STEPS: 'steps'>, 'EPOCH': <IntervalStrategy.EPOCH: 'epoch'>, '__new__': <function Enum.__new__ at 0x7f81495c4a60>}"
},
"logging_first_step": false,
"logging_steps": 1.0,
"logging_nan_inf_filter": true,
"save_strategy": {
"_value_": "steps",
"_name_": "STEPS",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x7f81495c6440>, '__module__': 'transformers.trainer_utils', '__doc__': 'An enumeration.', '_member_names_': ['NO', 'STEPS', 'EPOCH', 'BEST'], '_member_map_': {'NO': <SaveStrategy.NO: 'no'>, 'STEPS': <SaveStrategy.STEPS: 'steps'>, 'EPOCH': <SaveStrategy.EPOCH: 'epoch'>, 'BEST': <SaveStrategy.BEST: 'best'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'no': <SaveStrategy.NO: 'no'>, 'steps': <SaveStrategy.STEPS: 'steps'>, 'epoch': <SaveStrategy.EPOCH: 'epoch'>, 'best': <SaveStrategy.BEST: 'best'>}, 'NO': <SaveStrategy.NO: 'no'>, 'STEPS': <SaveStrategy.STEPS: 'steps'>, 'EPOCH': <SaveStrategy.EPOCH: 'epoch'>, 'BEST': <SaveStrategy.BEST: 'best'>, '__new__': <function Enum.__new__ at 0x7f81495c4a60>}"
},
"save_steps": 200,
"save_total_limit": null,
"save_safetensors": true,
"save_on_each_node": false,
"save_only_model": false,
"restore_callback_states_from_checkpoint": false,
"no_cuda": false,
"use_cpu": false,
"use_mps_device": false,
"seed": 42,
"data_seed": null,
"jit_mode_eval": false,
"use_ipex": false,
"bf16": true,
"fp16": false,
"fp16_opt_level": "O1",
"half_precision_backend": "auto",
"bf16_full_eval": false,
"fp16_full_eval": false,
"tf32": true,
"local_rank": 3,
"ddp_backend": null,
"tpu_num_cores": null,
"tpu_metrics_debug": false,
"debug": [],
"dataloader_drop_last": false,
"eval_steps": null,
"dataloader_num_workers": 0,
"dataloader_prefetch_factor": null,
"past_index": -1,
"run_name": "/home/jiashuo/codes/ForesightOptim/checkpoints/ppo_Qwen3-14B_strategic",
"disable_tqdm": false,
"remove_unused_columns": false,
"label_names": null,
"load_best_model_at_end": false,
"metric_for_best_model": null,
"greater_is_better": null,
"ignore_data_skip": false,
"fsdp": [],
"fsdp_min_num_params": 0,
"fsdp_config": {
"min_num_params": 0,
"xla": false,
"xla_fsdp_v2": false,
"xla_fsdp_grad_ckpt": false
},
"fsdp_transformer_layer_cls_to_wrap": null,
"accelerator_config": {
"split_batches": false,
"dispatch_batches": null,
"even_batches": true,
"use_seedable_sampler": true,
"non_blocking": false,
"gradient_accumulation_kwargs": null
},
"deepspeed": null,
"label_smoothing_factor": 0.0,
"optim": {
"_value_": "adamw_torch",
"_name_": "ADAMW_TORCH",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x7f81495c6440>, '__module__': 'transformers.training_args', '__doc__': '\\n Stores the acceptable string identifiers for optimizers.\\n ', '_member_names_': ['ADAMW_TORCH', 'ADAMW_TORCH_FUSED', 'ADAMW_TORCH_XLA', 'ADAMW_TORCH_NPU_FUSED', 'ADAMW_APEX_FUSED', 'ADAFACTOR', 'ADAMW_ANYPRECISION', 'ADAMW_TORCH_4BIT', 'ADAMW_TORCH_8BIT', 'ADEMAMIX', 'SGD', 'ADAGRAD', 'ADAMW_BNB', 'ADAMW_8BIT', 'ADEMAMIX_8BIT', 'LION_8BIT', 'LION', 'PAGED_ADAMW', 'PAGED_ADAMW_8BIT', 'PAGED_ADEMAMIX', 'PAGED_ADEMAMIX_8BIT', 'PAGED_LION', 'PAGED_LION_8BIT', 'RMSPROP', 'RMSPROP_BNB', 'RMSPROP_8BIT', 'RMSPROP_32BIT', 'GALORE_ADAMW', 'GALORE_ADAMW_8BIT', 'GALORE_ADAFACTOR', 'GALORE_ADAMW_LAYERWISE', 'GALORE_ADAMW_8BIT_LAYERWISE', 'GALORE_ADAFACTOR_LAYERWISE', 'LOMO', 'ADALOMO', 'GROKADAMW', 'SCHEDULE_FREE_RADAM', 'SCHEDULE_FREE_ADAMW', 'SCHEDULE_FREE_SGD', 'APOLLO_ADAMW', 'APOLLO_ADAMW_LAYERWISE'], '_member_map_': {'ADAMW_TORCH': <OptimizerNames.ADAMW_TORCH: 'adamw_torch'>, 'ADAMW_TORCH_FUSED': <OptimizerNames.ADAMW_TORCH_FUSED: 'adamw_torch_fused'>, 'ADAMW_TORCH_XLA': <OptimizerNames.ADAMW_TORCH_XLA: 'adamw_torch_xla'>, 'ADAMW_TORCH_NPU_FUSED': <OptimizerNames.ADAMW_TORCH_NPU_FUSED: 'adamw_torch_npu_fused'>, 'ADAMW_APEX_FUSED': <OptimizerNames.ADAMW_APEX_FUSED: 'adamw_apex_fused'>, 'ADAFACTOR': <OptimizerNames.ADAFACTOR: 'adafactor'>, 'ADAMW_ANYPRECISION': <OptimizerNames.ADAMW_ANYPRECISION: 'adamw_anyprecision'>, 'ADAMW_TORCH_4BIT': <OptimizerNames.ADAMW_TORCH_4BIT: 'adamw_torch_4bit'>, 'ADAMW_TORCH_8BIT': <OptimizerNames.ADAMW_TORCH_8BIT: 'adamw_torch_8bit'>, 'ADEMAMIX': <OptimizerNames.ADEMAMIX: 'ademamix'>, 'SGD': <OptimizerNames.SGD: 'sgd'>, 'ADAGRAD': <OptimizerNames.ADAGRAD: 'adagrad'>, 'ADAMW_BNB': <OptimizerNames.ADAMW_BNB: 'adamw_bnb_8bit'>, 'ADAMW_8BIT': <OptimizerNames.ADAMW_8BIT: 'adamw_8bit'>, 'ADEMAMIX_8BIT': <OptimizerNames.ADEMAMIX_8BIT: 'ademamix_8bit'>, 'LION_8BIT': <OptimizerNames.LION_8BIT: 'lion_8bit'>, 'LION': <OptimizerNames.LION: 'lion_32bit'>, 'PAGED_ADAMW': <OptimizerNames.PAGED_ADAMW: 'paged_adamw_32bit'>, 'PAGED_ADAMW_8BIT': <OptimizerNames.PAGED_ADAMW_8BIT: 'paged_adamw_8bit'>, 'PAGED_ADEMAMIX': <OptimizerNames.PAGED_ADEMAMIX: 'paged_ademamix_32bit'>, 'PAGED_ADEMAMIX_8BIT': <OptimizerNames.PAGED_ADEMAMIX_8BIT: 'paged_ademamix_8bit'>, 'PAGED_LION': <OptimizerNames.PAGED_LION: 'paged_lion_32bit'>, 'PAGED_LION_8BIT': <OptimizerNames.PAGED_LION_8BIT: 'paged_lion_8bit'>, 'RMSPROP': <OptimizerNames.RMSPROP: 'rmsprop'>, 'RMSPROP_BNB': <OptimizerNames.RMSPROP_BNB: 'rmsprop_bnb'>, 'RMSPROP_8BIT': <OptimizerNames.RMSPROP_8BIT: 'rmsprop_bnb_8bit'>, 'RMSPROP_32BIT': <OptimizerNames.RMSPROP_32BIT: 'rmsprop_bnb_32bit'>, 'GALORE_ADAMW': <OptimizerNames.GALORE_ADAMW: 'galore_adamw'>, 'GALORE_ADAMW_8BIT': <OptimizerNames.GALORE_ADAMW_8BIT: 'galore_adamw_8bit'>, 'GALORE_ADAFACTOR': <OptimizerNames.GALORE_ADAFACTOR: 'galore_adafactor'>, 'GALORE_ADAMW_LAYERWISE': <OptimizerNames.GALORE_ADAMW_LAYERWISE: 'galore_adamw_layerwise'>, 'GALORE_ADAMW_8BIT_LAYERWISE': <OptimizerNames.GALORE_ADAMW_8BIT_LAYERWISE: 'galore_adamw_8bit_layerwise'>, 'GALORE_ADAFACTOR_LAYERWISE': <OptimizerNames.GALORE_ADAFACTOR_LAYERWISE: 'galore_adafactor_layerwise'>, 'LOMO': <OptimizerNames.LOMO: 'lomo'>, 'ADALOMO': <OptimizerNames.ADALOMO: 'adalomo'>, 'GROKADAMW': <OptimizerNames.GROKADAMW: 'grokadamw'>, 'SCHEDULE_FREE_RADAM': <OptimizerNames.SCHEDULE_FREE_RADAM: 'schedule_free_radam'>, 'SCHEDULE_FREE_ADAMW': <OptimizerNames.SCHEDULE_FREE_ADAMW: 
'schedule_free_adamw'>, 'SCHEDULE_FREE_SGD': <OptimizerNames.SCHEDULE_FREE_SGD: 'schedule_free_sgd'>, 'APOLLO_ADAMW': <OptimizerNames.APOLLO_ADAMW: 'apollo_adamw'>, 'APOLLO_ADAMW_LAYERWISE': <OptimizerNames.APOLLO_ADAMW_LAYERWISE: 'apollo_adamw_layerwise'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'adamw_torch': <OptimizerNames.ADAMW_TORCH: 'adamw_torch'>, 'adamw_torch_fused': <OptimizerNames.ADAMW_TORCH_FUSED: 'adamw_torch_fused'>, 'adamw_torch_xla': <OptimizerNames.ADAMW_TORCH_XLA: 'adamw_torch_xla'>, 'adamw_torch_npu_fused': <OptimizerNames.ADAMW_TORCH_NPU_FUSED: 'adamw_torch_npu_fused'>, 'adamw_apex_fused': <OptimizerNames.ADAMW_APEX_FUSED: 'adamw_apex_fused'>, 'adafactor': <OptimizerNames.ADAFACTOR: 'adafactor'>, 'adamw_anyprecision': <OptimizerNames.ADAMW_ANYPRECISION: 'adamw_anyprecision'>, 'adamw_torch_4bit': <OptimizerNames.ADAMW_TORCH_4BIT: 'adamw_torch_4bit'>, 'adamw_torch_8bit': <OptimizerNames.ADAMW_TORCH_8BIT: 'adamw_torch_8bit'>, 'ademamix': <OptimizerNames.ADEMAMIX: 'ademamix'>, 'sgd': <OptimizerNames.SGD: 'sgd'>, 'adagrad': <OptimizerNames.ADAGRAD: 'adagrad'>, 'adamw_bnb_8bit': <OptimizerNames.ADAMW_BNB: 'adamw_bnb_8bit'>, 'adamw_8bit': <OptimizerNames.ADAMW_8BIT: 'adamw_8bit'>, 'ademamix_8bit': <OptimizerNames.ADEMAMIX_8BIT: 'ademamix_8bit'>, 'lion_8bit': <OptimizerNames.LION_8BIT: 'lion_8bit'>, 'lion_32bit': <OptimizerNames.LION: 'lion_32bit'>, 'paged_adamw_32bit': <OptimizerNames.PAGED_ADAMW: 'paged_adamw_32bit'>, 'paged_adamw_8bit': <OptimizerNames.PAGED_ADAMW_8BIT: 'paged_adamw_8bit'>, 'paged_ademamix_32bit': <OptimizerNames.PAGED_ADEMAMIX: 'paged_ademamix_32bit'>, 'paged_ademamix_8bit': <OptimizerNames.PAGED_ADEMAMIX_8BIT: 'paged_ademamix_8bit'>, 'paged_lion_32bit': <OptimizerNames.PAGED_LION: 'paged_lion_32bit'>, 'paged_lion_8bit': <OptimizerNames.PAGED_LION_8BIT: 'paged_lion_8bit'>, 'rmsprop': <OptimizerNames.RMSPROP: 'rmsprop'>, 'rmsprop_bnb': <OptimizerNames.RMSPROP_BNB: 'rmsprop_bnb'>, 'rmsprop_bnb_8bit': <OptimizerNames.RMSPROP_8BIT: 'rmsprop_bnb_8bit'>, 'rmsprop_bnb_32bit': <OptimizerNames.RMSPROP_32BIT: 'rmsprop_bnb_32bit'>, 'galore_adamw': <OptimizerNames.GALORE_ADAMW: 'galore_adamw'>, 'galore_adamw_8bit': <OptimizerNames.GALORE_ADAMW_8BIT: 'galore_adamw_8bit'>, 'galore_adafactor': <OptimizerNames.GALORE_ADAFACTOR: 'galore_adafactor'>, 'galore_adamw_layerwise': <OptimizerNames.GALORE_ADAMW_LAYERWISE: 'galore_adamw_layerwise'>, 'galore_adamw_8bit_layerwise': <OptimizerNames.GALORE_ADAMW_8BIT_LAYERWISE: 'galore_adamw_8bit_layerwise'>, 'galore_adafactor_layerwise': <OptimizerNames.GALORE_ADAFACTOR_LAYERWISE: 'galore_adafactor_layerwise'>, 'lomo': <OptimizerNames.LOMO: 'lomo'>, 'adalomo': <OptimizerNames.ADALOMO: 'adalomo'>, 'grokadamw': <OptimizerNames.GROKADAMW: 'grokadamw'>, 'schedule_free_radam': <OptimizerNames.SCHEDULE_FREE_RADAM: 'schedule_free_radam'>, 'schedule_free_adamw': <OptimizerNames.SCHEDULE_FREE_ADAMW: 'schedule_free_adamw'>, 'schedule_free_sgd': <OptimizerNames.SCHEDULE_FREE_SGD: 'schedule_free_sgd'>, 'apollo_adamw': <OptimizerNames.APOLLO_ADAMW: 'apollo_adamw'>, 'apollo_adamw_layerwise': <OptimizerNames.APOLLO_ADAMW_LAYERWISE: 'apollo_adamw_layerwise'>}, 'ADAMW_TORCH': <OptimizerNames.ADAMW_TORCH: 'adamw_torch'>, 'ADAMW_TORCH_FUSED': <OptimizerNames.ADAMW_TORCH_FUSED: 'adamw_torch_fused'>, 'ADAMW_TORCH_XLA': <OptimizerNames.ADAMW_TORCH_XLA: 'adamw_torch_xla'>, 'ADAMW_TORCH_NPU_FUSED': <OptimizerNames.ADAMW_TORCH_NPU_FUSED: 'adamw_torch_npu_fused'>, 'ADAMW_APEX_FUSED': <OptimizerNames.ADAMW_APEX_FUSED: 'adamw_apex_fused'>, 
'ADAFACTOR': <OptimizerNames.ADAFACTOR: 'adafactor'>, 'ADAMW_ANYPRECISION': <OptimizerNames.ADAMW_ANYPRECISION: 'adamw_anyprecision'>, 'ADAMW_TORCH_4BIT': <OptimizerNames.ADAMW_TORCH_4BIT: 'adamw_torch_4bit'>, 'ADAMW_TORCH_8BIT': <OptimizerNames.ADAMW_TORCH_8BIT: 'adamw_torch_8bit'>, 'ADEMAMIX': <OptimizerNames.ADEMAMIX: 'ademamix'>, 'SGD': <OptimizerNames.SGD: 'sgd'>, 'ADAGRAD': <OptimizerNames.ADAGRAD: 'adagrad'>, 'ADAMW_BNB': <OptimizerNames.ADAMW_BNB: 'adamw_bnb_8bit'>, 'ADAMW_8BIT': <OptimizerNames.ADAMW_8BIT: 'adamw_8bit'>, 'ADEMAMIX_8BIT': <OptimizerNames.ADEMAMIX_8BIT: 'ademamix_8bit'>, 'LION_8BIT': <OptimizerNames.LION_8BIT: 'lion_8bit'>, 'LION': <OptimizerNames.LION: 'lion_32bit'>, 'PAGED_ADAMW': <OptimizerNames.PAGED_ADAMW: 'paged_adamw_32bit'>, 'PAGED_ADAMW_8BIT': <OptimizerNames.PAGED_ADAMW_8BIT: 'paged_adamw_8bit'>, 'PAGED_ADEMAMIX': <OptimizerNames.PAGED_ADEMAMIX: 'paged_ademamix_32bit'>, 'PAGED_ADEMAMIX_8BIT': <OptimizerNames.PAGED_ADEMAMIX_8BIT: 'paged_ademamix_8bit'>, 'PAGED_LION': <OptimizerNames.PAGED_LION: 'paged_lion_32bit'>, 'PAGED_LION_8BIT': <OptimizerNames.PAGED_LION_8BIT: 'paged_lion_8bit'>, 'RMSPROP': <OptimizerNames.RMSPROP: 'rmsprop'>, 'RMSPROP_BNB': <OptimizerNames.RMSPROP_BNB: 'rmsprop_bnb'>, 'RMSPROP_8BIT': <OptimizerNames.RMSPROP_8BIT: 'rmsprop_bnb_8bit'>, 'RMSPROP_32BIT': <OptimizerNames.RMSPROP_32BIT: 'rmsprop_bnb_32bit'>, 'GALORE_ADAMW': <OptimizerNames.GALORE_ADAMW: 'galore_adamw'>, 'GALORE_ADAMW_8BIT': <OptimizerNames.GALORE_ADAMW_8BIT: 'galore_adamw_8bit'>, 'GALORE_ADAFACTOR': <OptimizerNames.GALORE_ADAFACTOR: 'galore_adafactor'>, 'GALORE_ADAMW_LAYERWISE': <OptimizerNames.GALORE_ADAMW_LAYERWISE: 'galore_adamw_layerwise'>, 'GALORE_ADAMW_8BIT_LAYERWISE': <OptimizerNames.GALORE_ADAMW_8BIT_LAYERWISE: 'galore_adamw_8bit_layerwise'>, 'GALORE_ADAFACTOR_LAYERWISE': <OptimizerNames.GALORE_ADAFACTOR_LAYERWISE: 'galore_adafactor_layerwise'>, 'LOMO': <OptimizerNames.LOMO: 'lomo'>, 'ADALOMO': <OptimizerNames.ADALOMO: 'adalomo'>, 'GROKADAMW': <OptimizerNames.GROKADAMW: 'grokadamw'>, 'SCHEDULE_FREE_RADAM': <OptimizerNames.SCHEDULE_FREE_RADAM: 'schedule_free_radam'>, 'SCHEDULE_FREE_ADAMW': <OptimizerNames.SCHEDULE_FREE_ADAMW: 'schedule_free_adamw'>, 'SCHEDULE_FREE_SGD': <OptimizerNames.SCHEDULE_FREE_SGD: 'schedule_free_sgd'>, 'APOLLO_ADAMW': <OptimizerNames.APOLLO_ADAMW: 'apollo_adamw'>, 'APOLLO_ADAMW_LAYERWISE': <OptimizerNames.APOLLO_ADAMW_LAYERWISE: 'apollo_adamw_layerwise'>, '__new__': <function Enum.__new__ at 0x7f81495c4a60>}"
},
"optim_args": null,
"adafactor": false,
"group_by_length": false,
"length_column_name": "length",
"report_to": [
"tensorboard",
"wandb"
],
"ddp_find_unused_parameters": null,
"ddp_bucket_cap_mb": null,
"ddp_broadcast_buffers": null,
"dataloader_pin_memory": true,
"dataloader_persistent_workers": false,
"skip_memory_metrics": true,
"use_legacy_prediction_loop": false,
"push_to_hub": false,
"resume_from_checkpoint": null,
"hub_model_id": null,
"hub_strategy": {
"_value_": "every_save",
"_name_": "EVERY_SAVE",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x7f81495c6440>, '__module__': 'transformers.trainer_utils', '__doc__': 'An enumeration.', '_member_names_': ['END', 'EVERY_SAVE', 'CHECKPOINT', 'ALL_CHECKPOINTS'], '_member_map_': {'END': <HubStrategy.END: 'end'>, 'EVERY_SAVE': <HubStrategy.EVERY_SAVE: 'every_save'>, 'CHECKPOINT': <HubStrategy.CHECKPOINT: 'checkpoint'>, 'ALL_CHECKPOINTS': <HubStrategy.ALL_CHECKPOINTS: 'all_checkpoints'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'end': <HubStrategy.END: 'end'>, 'every_save': <HubStrategy.EVERY_SAVE: 'every_save'>, 'checkpoint': <HubStrategy.CHECKPOINT: 'checkpoint'>, 'all_checkpoints': <HubStrategy.ALL_CHECKPOINTS: 'all_checkpoints'>}, 'END': <HubStrategy.END: 'end'>, 'EVERY_SAVE': <HubStrategy.EVERY_SAVE: 'every_save'>, 'CHECKPOINT': <HubStrategy.CHECKPOINT: 'checkpoint'>, 'ALL_CHECKPOINTS': <HubStrategy.ALL_CHECKPOINTS: 'all_checkpoints'>, '__new__': <function Enum.__new__ at 0x7f81495c4a60>}"
},
"hub_token": null,
"hub_private_repo": null,
"hub_always_push": false,
"hub_revision": null,
"gradient_checkpointing": true,
"gradient_checkpointing_kwargs": null,
"include_inputs_for_metrics": false,
"include_for_metrics": [],
"eval_do_concat_batches": true,
"fp16_backend": "auto",
"push_to_hub_model_id": null,
"push_to_hub_organization": null,
"push_to_hub_token": null,
"mp_parameters": "",
"auto_find_batch_size": false,
"full_determinism": false,
"torchdynamo": null,
"ray_scope": "last",
"ddp_timeout": 1800,
"torch_compile": false,
"torch_compile_backend": null,
"torch_compile_mode": null,
"include_tokens_per_second": false,
"include_num_input_tokens_seen": false,
"neftune_noise_alpha": null,
"optim_target_modules": null,
"batch_eval_metrics": false,
"eval_on_start": false,
"use_liger_kernel": false,
"liger_kernel_config": null,
"eval_use_gather_object": false,
"average_tokens_across_devices": false,
"use_wandb": false,
"adapter_path": "",
"padding_side": "right",
"truncation_side": "left",
"add_sep_token": false,
"model_type": "llama",
"model_prefix": "llama",
"pooling_type": "average",
"model_name_or_path": "/home/jiashuo/codes/ForesightOptim/checkpoints/im_Qwen3-14B_strategic",
"ref_model_name_or_path": "",
"critic_model_name_or_path": "FacebookAI/roberta-base",
"game_name": "Both",
"game_max_turn": 6,
"data_dir": "path/to/cleaned_data",
"data_type": "no_type",
"data_path": "yahma/alpaca-cleaned",
"train_data_path": [
"/home/jiashuo/datasets/wordtaboo/imitation_selfplay_episodes/Qwen3-14B_train_selfplay_data.json",
"/home/jiashuo/datasets/rsagame/imitation_selfplay_episodes/Qwen3-14B_train_selfplay_data.json"
],
"eval_data_path": [],
"data_prefix": "yahma/alpaca-cleaned",
"data_suffix": "yahma/alpaca-cleaned",
"task_type": "training",
"train_method": "SelfPlayPPO",
"use_lora": true,
"debug_mode": false,
"cache_dir": null,
"clip_range": 0.2,
"length_penalty": 1.0,
"lm_sft_coeff": 0.0,
"lm_kl_coeff": 0.1,
"max_length": 2048,
"valid_data_size": 0,
"rollout_size": 128,
"replay_buffer_size": 10000,
"replay_batch_size": 16,
"critic_learning_rate": 2e-05,
"gamma": 0.99,
"tau": 0.95,
"max_new_tokens": 128,
"temperature": 0.9,
"top_p": 0.95,
"player_one_model_name_or_path": "",
"player_two_model_name_or_path": "",
"distributed_state": {
"_cpu": false,
"backend": "nccl",
"device": "cuda:3",
"debug": false,
"distributed_type": "DEEPSPEED",
"num_processes": 6,
"process_index": 3,
"local_process_index": 3,
"fork_launched": false
},
"_n_gpu": 1,
"__cached__setup_devices": "cuda:3",
"deepspeed_plugin": {
"hf_ds_config": {
"config": {
"train_batch_size": 96,
"train_micro_batch_size_per_gpu": 2,
"gradient_accumulation_steps": 8,
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "none",
"nvme_path": null
},
"offload_param": {
"device": "none",
"nvme_path": null
},
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"steps_per_print": Infinity,
"bf16": {
"enabled": true
},
"fp16": {
"enabled": false
},
"zero_allow_untested_optimizer": true
},
"_stage": 2,
"_offload": false,
"_dtype": "torch.bfloat16",
"mismatches": []
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": 1.0,
"zero_stage": 2,
"is_train_batch_min": true,
"offload_optimizer_device": "none",
"offload_param_device": "none",
"offload_optimizer_nvme_path": "none",
"offload_param_nvme_path": "none",
"zero3_init_flag": false,
"zero3_save_16bit_model": true,
"transformer_moe_cls_names": null,
"enable_msamp": false,
"msamp_opt_level": "O1",
"deepspeed_config": {
"train_batch_size": 96,
"train_micro_batch_size_per_gpu": 2,
"gradient_accumulation_steps": 8,
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "none",
"nvme_path": null
},
"offload_param": {
"device": "none",
"nvme_path": null
},
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"steps_per_print": Infinity,
"bf16": {
"enabled": true
},
"fp16": {
"enabled": false
},
"zero_allow_untested_optimizer": true
},
"_selected": true,
"dschf": {
"config": {
"train_micro_batch_size_per_gpu": 1,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "none",
"nvme_path": null
},
"offload_param": {
"device": "none",
"nvme_path": null
},
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"steps_per_print": Infinity,
"bf16": {
"enabled": true
},
"fp16": {
"enabled": false
}
},
"_stage": 2,
"_offload": false
}
}
}
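The `lm_kl_coeff` of 0.1 above weights a KL penalty that keeps the trained policy close to a reference model. A minimal, illustrative PyTorch sketch of adding such a penalty to a token-level language-modeling loss (not the authors' actual implementation):

```python
import torch.nn.functional as F

def kl_regularized_lm_loss(policy_logits, ref_logits, labels, kl_coeff=0.1):
    # Standard token-level cross-entropy (the SFT part).
    sft_loss = F.cross_entropy(
        policy_logits.view(-1, policy_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    # Token-level KL(policy || reference), averaged over positions.
    policy_logp = F.log_softmax(policy_logits, dim=-1)
    ref_logp = F.log_softmax(ref_logits, dim=-1)
    kl = (policy_logp.exp() * (policy_logp - ref_logp)).sum(-1).mean()
    return sft_loss + kl_coeff * kl
```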
### Hardware Requirements
* GPU: NVIDIA 5090 (48 GB)
* Number of GPUs: 4
* Memory of each GPU: 48G
|
anjan-k/Sentiment-Analysis-FineTune-HuggingFace
|
anjan-k
| 2025-09-16T07:06:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-16T07:06:09Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Sentiment-Analysis-FineTune-HuggingFace
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-Analysis-FineTune-HuggingFace
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8521
- Accuracy: 0.7529
- F1: 0.7516
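For reference, the checkpoint can be tried with the transformers text-classification pipeline (a minimal sketch; the example sentence is arbitrary):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="anjan-k/Sentiment-Analysis-FineTune-HuggingFace",
)
print(clf("I really enjoyed this movie!"))  # e.g. [{'label': ..., 'score': ...}]
```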
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6061 | 1.0 | 1952 | 0.5761 | 0.7616 | 0.7620 |
| 0.4957 | 2.0 | 3904 | 0.5936 | 0.7589 | 0.7603 |
| 0.382 | 3.0 | 5856 | 0.6639 | 0.7535 | 0.7543 |
| 0.2679 | 4.0 | 7808 | 0.8521 | 0.7529 | 0.7516 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
qualcomm/Yolo-X
|
qualcomm
| 2025-09-16T07:04:46Z | 813 | 5 |
pytorch
|
[
"pytorch",
"tflite",
"real_time",
"android",
"object-detection",
"license:other",
"region:us"
] |
object-detection
| 2025-03-14T02:22:40Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: object-detection
---

# Yolo-X: Optimized for Mobile Deployment
## Real-time object detection optimized for mobile and edge
YoloX is a machine learning model that predicts bounding boxes and classes of objects in an image.
This model is an implementation of Yolo-X found [here](https://github.com/Megvii-BaseDetection/YOLOX/).
This repository provides scripts to run Yolo-X on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/yolox).
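The provided scripts are the intended entry point; as a quick, framework-level sanity check, the float TFLite export can also be loaded directly with the TensorFlow Lite interpreter (a minimal sketch; the file name and dummy input are assumptions, and real use needs YoloX-specific pre/post-processing):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="Yolo-X.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

# Dummy input matching the model's declared 640x640 shape; real images need
# YoloX resizing/normalization before this step.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print([interpreter.get_tensor(o["index"]).shape for o in outs])
```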
### Model Details
- **Model Type:** Model_use_case.object_detection
- **Model Stats:**
- Model checkpoint: YoloX Small
- Input resolution: 640x640
- Number of parameters: 8.98M
- Model size (float): 34.3 MB
- Model size (w8a16): 9.53 MB
- Model size (w8a8): 8.96 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| Yolo-X | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 32.199 ms | 0 - 46 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 31.57 ms | 1 - 69 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 14.375 ms | 0 - 54 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 18.879 ms | 4 - 48 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 8.727 ms | 0 - 12 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 8.254 ms | 5 - 24 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 13.267 ms | 0 - 61 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.onnx.zip) |
| Yolo-X | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 11.861 ms | 0 - 48 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 11.246 ms | 1 - 61 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 32.199 ms | 0 - 46 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 31.57 ms | 1 - 69 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 8.586 ms | 0 - 12 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 8.246 ms | 5 - 25 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 16.149 ms | 0 - 43 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 14.916 ms | 0 - 40 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 8.699 ms | 0 - 16 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 8.326 ms | 5 - 29 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 11.861 ms | 0 - 48 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 11.246 ms | 1 - 61 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 8.807 ms | 0 - 16 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 8.277 ms | 5 - 23 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 13.611 ms | 0 - 58 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.onnx.zip) |
| Yolo-X | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 6.46 ms | 0 - 60 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 6.106 ms | 5 - 87 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 9.498 ms | 5 - 153 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.onnx.zip) |
| Yolo-X | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 5.608 ms | 0 - 51 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.tflite) |
| Yolo-X | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 5.783 ms | 5 - 78 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 9.062 ms | 5 - 98 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.onnx.zip) |
| Yolo-X | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 8.932 ms | 5 - 5 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.dlc) |
| Yolo-X | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 12.716 ms | 14 - 14 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X.onnx.zip) |
| Yolo-X | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 15.61 ms | 2 - 41 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 9.174 ms | 2 - 55 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 7.837 ms | 2 - 14 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 181.18 ms | 71 - 489 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.onnx.zip) |
| Yolo-X | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 8.469 ms | 2 - 41 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 27.358 ms | 2 - 55 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 397.518 ms | 101 - 118 MB | CPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.onnx.zip) |
| Yolo-X | w8a16 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 376.909 ms | 90 - 101 MB | CPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.onnx.zip) |
| Yolo-X | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 15.61 ms | 2 - 41 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 7.86 ms | 2 - 10 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 10.01 ms | 2 - 47 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 7.873 ms | 2 - 13 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 8.469 ms | 2 - 41 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 7.839 ms | 2 - 12 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 190.67 ms | 69 - 491 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.onnx.zip) |
| Yolo-X | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 5.056 ms | 2 - 56 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 150.651 ms | 366 - 1898 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.onnx.zip) |
| Yolo-X | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 3.998 ms | 2 - 51 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 8.487 ms | 6 - 6 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.dlc) |
| Yolo-X | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 190.212 ms | 116 - 116 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a16.onnx.zip) |
| Yolo-X | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 6.311 ms | 0 - 31 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 5.44 ms | 1 - 34 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 3.117 ms | 0 - 49 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.919 ms | 1 - 52 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 2.848 ms | 0 - 35 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 2.315 ms | 1 - 11 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 9.626 ms | 0 - 40 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.onnx.zip) |
| Yolo-X | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 3.256 ms | 0 - 31 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 2.671 ms | 1 - 36 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 9.936 ms | 0 - 41 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 92.061 ms | 45 - 61 MB | CPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.onnx.zip) |
| Yolo-X | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 87.414 ms | 38 - 47 MB | CPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.onnx.zip) |
| Yolo-X | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 6.311 ms | 0 - 31 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 5.44 ms | 1 - 34 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.853 ms | 0 - 34 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 2.324 ms | 1 - 11 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 4.138 ms | 0 - 38 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 3.635 ms | 1 - 42 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.855 ms | 0 - 35 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 2.331 ms | 2 - 12 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 3.256 ms | 0 - 31 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 2.671 ms | 1 - 36 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.844 ms | 0 - 35 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 2.334 ms | 1 - 11 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 9.545 ms | 0 - 30 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.onnx.zip) |
| Yolo-X | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.842 ms | 0 - 51 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.548 ms | 1 - 54 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 7.017 ms | 1 - 103 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.onnx.zip) |
| Yolo-X | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.692 ms | 0 - 38 MB | NPU | [Yolo-X.tflite](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.tflite) |
| Yolo-X | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.289 ms | 1 - 43 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 7.691 ms | 0 - 73 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.onnx.zip) |
| Yolo-X | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 2.576 ms | 26 - 26 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.dlc) |
| Yolo-X | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 10.08 ms | 8 - 8 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8.onnx.zip) |
| Yolo-X | w8a8_mixed_int16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 7.901 ms | 1 - 37 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 3.594 ms | 1 - 13 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 148.819 ms | 48 - 489 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.onnx.zip) |
| Yolo-X | w8a8_mixed_int16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 4.049 ms | 1 - 38 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 339.26 ms | 92 - 110 MB | CPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.onnx.zip) |
| Yolo-X | w8a8_mixed_int16 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 328.302 ms | 86 - 102 MB | CPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.onnx.zip) |
| Yolo-X | w8a8_mixed_int16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 7.901 ms | 1 - 37 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 3.605 ms | 1 - 11 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 3.613 ms | 1 - 10 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 4.049 ms | 1 - 38 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 3.608 ms | 1 - 11 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 175.074 ms | 63 - 481 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.onnx.zip) |
| Yolo-X | w8a8_mixed_int16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.417 ms | 1 - 52 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 128.269 ms | 628 - 2322 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.onnx.zip) |
| Yolo-X | w8a8_mixed_int16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.748 ms | 1 - 45 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 3.991 ms | 16 - 16 MB | NPU | [Yolo-X.dlc](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.dlc) |
| Yolo-X | w8a8_mixed_int16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 159.32 ms | 116 - 116 MB | NPU | [Yolo-X.onnx.zip](https://huggingface.co/qualcomm/Yolo-X/blob/main/Yolo-X_w8a8_mixed_int16.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolox]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolox.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolox.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolox.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolox/qai_hub_models/models/Yolo-X/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolox import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
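For instance, a minimal sketch of such a check is shown below. It assumes `on_device_output` maps each output name to a list of arrays (one per input), and that `expected_outputs` is a hypothetical dict of reference arrays produced by the PyTorch model with the same keys; neither the helper nor `expected_outputs` is part of the AI Hub API.
```python
import numpy as np

def psnr(expected: np.ndarray, observed: np.ndarray, eps: float = 1e-12) -> float:
    """Peak signal-to-noise ratio between a reference and an on-device output."""
    expected = expected.astype(np.float32)
    observed = observed.astype(np.float32).reshape(expected.shape)
    mse = float(np.mean((expected - observed) ** 2))
    peak = float(np.abs(expected).max())
    return 10.0 * np.log10((peak ** 2) / (mse + eps))

# `on_device_output` comes from the snippet above; `expected_outputs` is an
# assumed dict of reference arrays from the PyTorch model.
for name, batches in on_device_output.items():
    print(f"{name}: PSNR = {psnr(expected_outputs[name], np.asarray(batches[0])):.2f} dB")
```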
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolox.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolox.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes (a quick sanity-check sketch for the `.tflite` export follows this list):
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the `.tflite` model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
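Before integrating the asset into an Android app, a quick desktop sanity check with the TensorFlow Lite Python interpreter can be helpful. This is an illustrative sketch, not part of `qai-hub-models`; the file name is an assumption, and the random input only verifies shapes and dtypes, not accuracy.
```python
import numpy as np
import tensorflow as tf

# Load the exported TFLite asset (file name is an assumption).
interpreter = tf.lite.Interpreter(model_path="Yolo-X_w8a8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run one inference with a random input of the expected shape and dtype.
dummy = np.random.random_sample(input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

for detail in output_details:
    print(detail["name"], interpreter.get_tensor(detail["index"]).shape)
```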
## View on Qualcomm® AI Hub
Get more details on Yolo-X's performance across various devices [here](https://aihub.qualcomm.com/models/yolox).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Yolo-X can be found
[here](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [YOLOX: Exceeding YOLO Series in 2021](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/README.md)
* [Source Model Implementation](https://github.com/Megvii-BaseDetection/YOLOX/)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/Yolo-v3
|
qualcomm
| 2025-09-16T07:03:46Z | 1 | 0 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"object-detection",
"arxiv:1804.02767",
"license:other",
"region:us"
] |
object-detection
| 2024-12-12T22:27:46Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: object-detection
---

# Yolo-v3: Optimized for Mobile Deployment
## Real-time object detection optimized for mobile and edge
YoloV3 is a machine learning model that predicts bounding boxes and classes of objects in an image.
This model is an implementation of Yolo-v3 found [here](https://github.com/ultralytics/yolov3/tree/v8).
This repository provides scripts to run Yolo-v3 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/yolov3).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Model_use_case.object_detection
- **Model Stats:**
- Model checkpoint: YoloV3 Tiny
- Input resolution: 416p (416x416)
- Number of parameters: 11.5M
- Model size (float): 43.9 MB
- Model size (w8a16): 16.9 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Yolo-v3 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 30.586 ms | 0 - 73 MB | NPU | -- |
| Yolo-v3 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 25.181 ms | 2 - 88 MB | NPU | -- |
| Yolo-v3 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 17.386 ms | 0 - 90 MB | NPU | -- |
| Yolo-v3 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 16.183 ms | 5 - 78 MB | NPU | -- |
| Yolo-v3 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 15.724 ms | 0 - 10 MB | NPU | -- |
| Yolo-v3 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 10.988 ms | 5 - 22 MB | NPU | -- |
| Yolo-v3 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 12.237 ms | 0 - 64 MB | NPU | -- |
| Yolo-v3 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 16.631 ms | 0 - 73 MB | NPU | -- |
| Yolo-v3 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 12.31 ms | 2 - 88 MB | NPU | -- |
| Yolo-v3 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 15.454 ms | 0 - 10 MB | NPU | -- |
| Yolo-v3 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 10.918 ms | 5 - 23 MB | NPU | -- |
| Yolo-v3 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 12.174 ms | 0 - 65 MB | NPU | -- |
| Yolo-v3 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 9.635 ms | 0 - 97 MB | NPU | -- |
| Yolo-v3 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 7.935 ms | 5 - 98 MB | NPU | -- |
| Yolo-v3 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 9.08 ms | 5 - 91 MB | NPU | -- |
| Yolo-v3 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 7.404 ms | 0 - 78 MB | NPU | -- |
| Yolo-v3 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 8.092 ms | 5 - 94 MB | NPU | -- |
| Yolo-v3 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 9.127 ms | 5 - 90 MB | NPU | -- |
| Yolo-v3 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 11.014 ms | 8 - 8 MB | NPU | -- |
| Yolo-v3 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 11.873 ms | 21 - 21 MB | NPU | -- |
| Yolo-v3 | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 13.718 ms | 1 - 69 MB | NPU | -- |
| Yolo-v3 | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 8.174 ms | 2 - 93 MB | NPU | -- |
| Yolo-v3 | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 6.271 ms | 2 - 29 MB | NPU | -- |
| Yolo-v3 | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 68.937 ms | 51 - 246 MB | NPU | -- |
| Yolo-v3 | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 6.756 ms | 2 - 70 MB | NPU | -- |
| Yolo-v3 | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 20.677 ms | 2 - 84 MB | NPU | -- |
| Yolo-v3 | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 174.329 ms | 90 - 102 MB | CPU | -- |
| Yolo-v3 | w8a16 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 154.683 ms | 82 - 103 MB | CPU | -- |
| Yolo-v3 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 6.256 ms | 2 - 35 MB | NPU | -- |
| Yolo-v3 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 68.882 ms | 62 - 282 MB | NPU | -- |
| Yolo-v3 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 4.57 ms | 2 - 90 MB | NPU | -- |
| Yolo-v3 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 50.406 ms | 0 - 683 MB | NPU | -- |
| Yolo-v3 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 4.485 ms | 2 - 77 MB | NPU | -- |
| Yolo-v3 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 51.485 ms | 65 - 728 MB | NPU | -- |
| Yolo-v3 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 6.336 ms | 65 - 65 MB | NPU | -- |
| Yolo-v3 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 64.666 ms | 115 - 115 MB | NPU | -- |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolov3]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolov3.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolov3.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolov3.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolov3/qai_hub_models/models/Yolo-v3/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolov3 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
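As an illustration, the sketch below spot-checks each on-device output against the corresponding PyTorch result within a loose tolerance. It assumes `on_device_output` maps output names to lists of arrays and that `expected_outputs` is a hypothetical dict of reference arrays with the same keys; neither name is part of the AI Hub API.
```python
import numpy as np

# Compare on-device outputs against assumed PyTorch reference outputs.
for name, batches in on_device_output.items():
    observed = np.asarray(batches[0], dtype=np.float32)
    expected = np.asarray(expected_outputs[name], dtype=np.float32).reshape(observed.shape)
    rel_err = np.abs(observed - expected) / (np.abs(expected) + 1e-6)
    print(f"{name}: max relative error = {rel_err.max():.4f}")
    assert np.allclose(observed, expected, rtol=0.1, atol=0.1), f"{name} deviates beyond tolerance"
```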
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolov3.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolov3.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Yolo-v3's performance across various devices [here](https://aihub.qualcomm.com/models/yolov3).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Yolo-v3 can be found
[here](https://github.com/ultralytics/yolov3/blob/v8/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/ultralytics/yolov3/blob/v8/LICENSE)
## References
* [YOLOv3: An Incremental Improvement](https://arxiv.org/abs/1804.02767)
* [Source Model Implementation](https://github.com/ultralytics/yolov3/tree/v8)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/YOLOv11-Segmentation
|
qualcomm
| 2025-09-16T07:03:41Z | 0 | 1 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"image-segmentation",
"license:other",
"region:us"
] |
image-segmentation
| 2024-12-12T21:29:39Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: image-segmentation
---

# YOLOv11-Segmentation: Optimized for Mobile Deployment
## Real-time object segmentation optimized for mobile and edge by Ultralytics
Ultralytics YOLOv11 is a machine learning model that predicts bounding boxes, segmentation masks and classes of objects in an image.
This model is an implementation of YOLOv11-Segmentation found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment).
This repository provides scripts to run YOLOv11-Segmentation on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/yolov11_seg).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Model_use_case.semantic_segmentation
- **Model Stats:**
- Model checkpoint: YOLO11N-Seg
- Input resolution: 640x640
- Number of output classes: 80
- Number of parameters: 2.89M
- Model size (float): 11.1 MB
- Model size (w8a16): 11.4 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| YOLOv11-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 17.233 ms | 4 - 76 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 16.001 ms | 1 - 110 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 9.275 ms | 4 - 49 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 5.377 ms | 4 - 39 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 4.64 ms | 5 - 50 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 6.685 ms | 3 - 46 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 6.946 ms | 4 - 76 MB | NPU | -- |
| YOLOv11-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 6.245 ms | 2 - 109 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 17.233 ms | 4 - 76 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 16.001 ms | 1 - 110 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 5.309 ms | 4 - 22 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 4.631 ms | 6 - 21 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 10.631 ms | 4 - 41 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 5.345 ms | 0 - 25 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 4.612 ms | 0 - 37 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 6.946 ms | 4 - 76 MB | NPU | -- |
| YOLOv11-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 6.245 ms | 2 - 109 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 5.35 ms | 0 - 26 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 4.648 ms | 5 - 52 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 6.741 ms | 5 - 57 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 3.925 ms | 0 - 93 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 3.432 ms | 5 - 207 MB | NPU | -- |
| YOLOv11-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 5.039 ms | 15 - 143 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 3.081 ms | 3 - 77 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 3.065 ms | 5 - 124 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 4.329 ms | 5 - 112 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 5.073 ms | 60 - 60 MB | NPU | -- |
| YOLOv11-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 6.862 ms | 17 - 17 MB | NPU | -- |
| YOLOv11-Segmentation | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 58.426 ms | 13 - 201 MB | NPU | -- |
| YOLOv11-Segmentation | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 222.185 ms | 161 - 178 MB | CPU | -- |
| YOLOv11-Segmentation | w8a16 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 206.907 ms | 163 - 169 MB | CPU | -- |
| YOLOv11-Segmentation | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 57.333 ms | 13 - 199 MB | NPU | -- |
| YOLOv11-Segmentation | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 45.07 ms | 0 - 1605 MB | NPU | -- |
| YOLOv11-Segmentation | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 51.321 ms | 6 - 672 MB | NPU | -- |
| YOLOv11-Segmentation | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 57.38 ms | 31 - 31 MB | NPU | -- |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolov11-seg]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolov11_seg.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolov11_seg.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolov11_seg.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolov11_seg/qai_hub_models/models/YOLOv11-Segmentation/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolov11_seg import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
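One simple spot check, sketched below under the same assumptions used elsewhere in this card (an `expected_outputs` dict of PyTorch reference arrays keyed like `on_device_output`; both names are illustrative, not part of the AI Hub API), is the cosine similarity between flattened outputs.
```python
import numpy as np

# Cosine similarity between flattened PyTorch and on-device outputs.
for name, batches in on_device_output.items():
    observed = np.asarray(batches[0], dtype=np.float32).ravel()
    expected = np.asarray(expected_outputs[name], dtype=np.float32).ravel()
    cos = float(observed @ expected) / float(np.linalg.norm(observed) * np.linalg.norm(expected) + 1e-12)
    print(f"{name}: cosine similarity = {cos:.4f}")
```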
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolov11_seg.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolov11_seg.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on YOLOv11-Segmentation's performance across various devices [here](https://aihub.qualcomm.com/models/yolov11_seg).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of YOLOv11-Segmentation can be found
[here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE)
## References
* [Ultralytics YOLOv11 Docs: Instance Segmentation](https://docs.ultralytics.com/tasks/segment/)
* [Source Model Implementation](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
csikasote/mms-1b-all-bemgen-combined-m100f50-42-DAT-3e-1
|
csikasote
| 2025-09-16T07:03:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-16T06:34:59Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m100f50-42-DAT-3e-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m100f50-42-DAT-3e-1
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3328
- Cer: 0.0923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
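For reference, these settings roughly map onto the following `transformers` `TrainingArguments`; this is an illustrative reconstruction (the output directory is an assumption), not the exact training script.
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="mms-1b-all-bemgen-combined-m100f50-42-DAT-3e-1",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```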
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 2.8187 | 0.5618 | 100 | 3.0424 | 1.0 |
| 0.967 | 1.1236 | 200 | 1.5458 | 0.5151 |
| 0.6592 | 1.6854 | 300 | 0.3801 | 0.1038 |
| 0.6498 | 2.2472 | 400 | 0.3328 | 0.0923 |
| 0.7063 | 2.8090 | 500 | 0.3103 | 0.0853 |
| 0.6608 | 3.3708 | 600 | 0.3013 | 0.0842 |
| 0.6945 | 3.9326 | 700 | 0.2879 | 0.0812 |
| 0.7163 | 4.4944 | 800 | 0.2885 | 0.0814 |
| 0.7043 | 5.0562 | 900 | 0.2958 | 0.0863 |
| 0.6722 | 5.6180 | 1000 | 0.2908 | 0.0798 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
qualcomm/YamNet
|
qualcomm
| 2025-09-16T07:03:26Z | 526 | 2 |
pytorch
|
[
"pytorch",
"tflite",
"real_time",
"android",
"audio-classification",
"arxiv:1704.04861",
"license:other",
"region:us"
] |
audio-classification
| 2025-03-14T02:22:13Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: audio-classification
---

# YamNet: Optimized for Mobile Deployment
## Audio event classification model
An audio event classifier trained on the AudioSet dataset to predict audio events from the AudioSet ontology, employing the Mobilenet_v1 depthwise-separable convolution architecture.
This model is an implementation of YamNet found [here](https://github.com/w-hc/torch_audioset).
This repository provides scripts to run YamNet on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/yamnet).
### Model Details
- **Model Type:** Model_use_case.audio_classification
- **Model Stats:**
- Model checkpoint: yamnet.pth
- Input resolution: 1x1x96x64
- Number of parameters: 3.73M
- Model size (float): 14.2 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| YamNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 0.668 ms | 0 - 22 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 0.644 ms | 0 - 16 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.319 ms | 0 - 34 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.35 ms | 0 - 25 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.211 ms | 0 - 72 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.217 ms | 0 - 51 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 0.293 ms | 0 - 55 MB | NPU | [YamNet.onnx.zip](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.onnx.zip) |
| YamNet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 0.368 ms | 0 - 22 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.36 ms | 0 - 16 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 0.668 ms | 0 - 22 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 0.644 ms | 0 - 16 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.22 ms | 0 - 70 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.213 ms | 0 - 52 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 0.547 ms | 0 - 28 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 0.513 ms | 0 - 24 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.207 ms | 0 - 73 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.215 ms | 0 - 48 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 0.368 ms | 0 - 22 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.36 ms | 0 - 16 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.212 ms | 0 - 73 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.214 ms | 0 - 49 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 0.309 ms | 0 - 53 MB | NPU | [YamNet.onnx.zip](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.onnx.zip) |
| YamNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.173 ms | 0 - 34 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.169 ms | 0 - 25 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.244 ms | 0 - 29 MB | NPU | [YamNet.onnx.zip](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.onnx.zip) |
| YamNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.174 ms | 0 - 29 MB | NPU | [YamNet.tflite](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.tflite) |
| YamNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.147 ms | 0 - 18 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.25 ms | 0 - 19 MB | NPU | [YamNet.onnx.zip](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.onnx.zip) |
| YamNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.267 ms | 56 - 56 MB | NPU | [YamNet.dlc](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.dlc) |
| YamNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.274 ms | 8 - 8 MB | NPU | [YamNet.onnx.zip](https://huggingface.co/qualcomm/YamNet/blob/main/YamNet.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yamnet]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yamnet.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yamnet.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yamnet.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yamnet/qai_hub_models/models/YamNet/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yamnet import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
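For example, a minimal sketch of a spot check is shown below. It assumes the model produces a single output of per-frame class scores and that `on_device_output` maps the output name to a list of arrays; mapping class indices to AudioSet labels would require the ontology class map, which is not shown here.
```python
import numpy as np

# Rank the on-device class scores and print the top five class indices.
scores = np.asarray(next(iter(on_device_output.values()))[0], dtype=np.float32)
frame_mean = scores.reshape(-1, scores.shape[-1]).mean(axis=0)
top5 = np.argsort(frame_mean)[::-1][:5]
for idx in top5:
    print(f"class index {idx}: mean score {frame_mean[idx]:.3f}")
```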
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yamnet.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yamnet.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on YamNet's performance across various devices [here](https://aihub.qualcomm.com/models/yamnet).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of YamNet can be found
[here](https://github.com/w-hc/torch_audioset/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [MobileNets Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861)
* [Source Model Implementation](https://github.com/w-hc/torch_audioset)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/XLSR
|
qualcomm
| 2025-09-16T07:03:16Z | 318 | 11 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"image-to-image",
"arxiv:2105.10288",
"license:other",
"region:us"
] |
image-to-image
| 2024-02-25T23:01:07Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-to-image
---

# XLSR: Optimized for Mobile Deployment
## Upscale images in real time
XLSR is designed for lightweight real-time upscaling of images.
This model is an implementation of XLSR found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/xlsr).
This repository provides scripts to run XLSR on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/xlsr).
### Model Details
- **Model Type:** Model_use_case.super_resolution
- **Model Stats:**
- Model checkpoint: xlsr_3x_checkpoint
- Input resolution: 128x128
- Number of parameters: 28.0K
- Model size (float): 115 KB
- Model size (w8a8): 45.6 KB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| XLSR | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 4.862 ms | 3 - 18 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 2.132 ms | 0 - 15 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 2.505 ms | 0 - 29 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.044 ms | 0 - 30 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 2.298 ms | 0 - 8 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.765 ms | 0 - 6 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 1.158 ms | 0 - 7 MB | NPU | [XLSR.onnx.zip](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.onnx.zip) |
| XLSR | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 2.785 ms | 0 - 16 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.164 ms | 0 - 15 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 4.862 ms | 3 - 18 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 2.132 ms | 0 - 15 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.293 ms | 0 - 5 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.775 ms | 0 - 6 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 3.152 ms | 0 - 21 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.39 ms | 0 - 25 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.297 ms | 0 - 7 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.766 ms | 0 - 6 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 2.785 ms | 0 - 16 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.164 ms | 0 - 15 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.299 ms | 0 - 7 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.767 ms | 0 - 6 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 1.199 ms | 0 - 8 MB | NPU | [XLSR.onnx.zip](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.onnx.zip) |
| XLSR | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.467 ms | 0 - 30 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.462 ms | 0 - 27 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.693 ms | 0 - 25 MB | NPU | [XLSR.onnx.zip](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.onnx.zip) |
| XLSR | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.352 ms | 0 - 21 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.tflite) |
| XLSR | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.607 ms | 0 - 23 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.588 ms | 0 - 16 MB | NPU | [XLSR.onnx.zip](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.onnx.zip) |
| XLSR | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.909 ms | 0 - 0 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.dlc) |
| XLSR | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.207 ms | 8 - 8 MB | NPU | [XLSR.onnx.zip](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR.onnx.zip) |
| XLSR | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 1.016 ms | 0 - 15 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 0.897 ms | 0 - 15 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.501 ms | 0 - 28 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.52 ms | 0 - 25 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.427 ms | 0 - 11 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.367 ms | 0 - 10 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 0.633 ms | 1 - 16 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.57 ms | 0 - 16 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 1.019 ms | 0 - 19 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 1.055 ms | 0 - 19 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 4.976 ms | 3 - 10 MB | GPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 1.016 ms | 0 - 15 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 0.897 ms | 0 - 15 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.433 ms | 0 - 10 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.371 ms | 0 - 10 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 0.862 ms | 0 - 23 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 0.734 ms | 0 - 24 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.434 ms | 0 - 9 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.385 ms | 0 - 10 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 0.633 ms | 1 - 16 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.57 ms | 0 - 16 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.424 ms | 0 - 10 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.385 ms | 0 - 10 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.276 ms | 0 - 23 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.242 ms | 0 - 26 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.387 ms | 0 - 21 MB | NPU | [XLSR.tflite](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.tflite) |
| XLSR | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.269 ms | 0 - 25 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
| XLSR | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.477 ms | 4 - 4 MB | NPU | [XLSR.dlc](https://huggingface.co/qualcomm/XLSR/blob/main/XLSR_w8a8.dlc) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.xlsr.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above command).
```
%run -m qai_hub_models.models.xlsr.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.xlsr.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/xlsr/qai_hub_models/models/XLSR/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.xlsr import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
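For example, here is a minimal sketch of a PSNR check between the on-device output and the PyTorch reference. It assumes a single-output model whose outputs are normalized to [0, 1], and that `download_output_data()` returns a dict mapping output names to lists of arrays; adapt the indexing if your output structure differs.
```python
import numpy as np
import torch

# PyTorch reference output on the same sample inputs used for the inference job.
torch_inputs = [torch.tensor(data[0]) for _, data in input_data.items()]
with torch.no_grad():
    reference = torch_model(*torch_inputs).numpy()

# First output tensor downloaded from the on-device inference job
# (assumes a dict of output name -> list of arrays; adjust if your format differs).
device_out = np.asarray(list(on_device_output.values())[0][0])

# PSNR between the two outputs (assumes values normalized to [0, 1]); higher is closer.
mse = np.mean((reference - device_out) ** 2)
psnr = 10 * np.log10(1.0 / mse) if mse > 0 else float("inf")
print(f"PSNR between PyTorch and on-device outputs: {psnr:.2f} dB")
```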
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.xlsr.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above command).
```
%run -m qai_hub_models.models.xlsr.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on XLSR's performance across various devices [here](https://aihub.qualcomm.com/models/xlsr).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of XLSR can be found
[here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Extremely Lightweight Quantization Robust Real-Time Single-Image Super Resolution for Mobile Devices](https://arxiv.org/abs/2105.10288)
* [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/xlsr)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
luckeciano/Qwen-2.5-7B-GRPO-Base-KL-0.1-v2_4688
|
luckeciano
| 2025-09-16T07:03:15Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T02:59:34Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-KL-0.1-v2_4688
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-KL-0.1-v2_4688
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-KL-0.1-v2_4688", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/4k2hx02f)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qualcomm/Whisper-Tiny
|
qualcomm
| 2025-09-16T07:01:06Z | 0 | 0 |
pytorch
|
[
"pytorch",
"foundation",
"android",
"automatic-speech-recognition",
"license:other",
"region:us"
] |
automatic-speech-recognition
| 2025-08-30T00:13:13Z |
---
library_name: pytorch
license: other
tags:
- foundation
- android
pipeline_tag: automatic-speech-recognition
---

# Whisper-Tiny: Optimized for Mobile Deployment
## Transformer-based automatic speech recognition (ASR) model for multilingual transcription and translation available on HuggingFace
HuggingFace Whisper-Tiny ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. This model is based on the transformer architecture and has been optimized for edge inference by replacing Multi-Head Attention (MHA) with Single-Head Attention (SHA) and linear layers with convolutional (conv) layers. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. Specifically, it excels in long-form transcription, capable of accurately transcribing audio clips up to 30 seconds long. Time to the first token is the encoder's latency, while time to each additional token is the decoder's latency, where we assume the max decoded length specified below.
This model is an implementation of Whisper-Tiny found [here](https://github.com/huggingface/transformers/tree/v4.42.3/src/transformers/models/whisper).
This repository provides scripts to run Whisper-Tiny on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/whisper_tiny).
### Model Details
- **Model Type:** Model_use_case.speech_recognition
- **Model Stats:**
- Model checkpoint: openai/whisper-tiny
- Input resolution: 80x3000 (30 seconds audio)
- Max decoded sequence length: 200 tokens
- Number of parameters (HfWhisperEncoder): 9.39M
- Model size (HfWhisperEncoder) (float): 35.9 MB
- Number of parameters (HfWhisperDecoder): 28.4M
- Model size (HfWhisperDecoder) (float): 109 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| HfWhisperEncoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 61.502 ms | 0 - 9 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_CONTEXT_BINARY | 53.824 ms | 1 - 22 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 20.104 ms | 1 - 3 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | PRECOMPILED_QNN_ONNX | 20.588 ms | 4 - 38 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 23.543 ms | 1 - 11 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 61.502 ms | 0 - 9 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 20.282 ms | 1 - 2 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_CONTEXT_BINARY | 50.611 ms | 1 - 18 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 20.333 ms | 0 - 3 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 23.543 ms | 1 - 11 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 20.132 ms | 1 - 3 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 20.56 ms | 4 - 38 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 15.35 ms | 0 - 19 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 16.038 ms | 20 - 39 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 12.479 ms | 0 - 15 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 13.651 ms | 19 - 33 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 19.708 ms | 0 - 0 MB | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 20.027 ms | 34 - 34 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 3.589 ms | 10 - 19 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_CONTEXT_BINARY | 2.647 ms | 10 - 27 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 2.144 ms | 9 - 12 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | PRECOMPILED_QNN_ONNX | 2.4 ms | 0 - 87 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 2.647 ms | 9 - 18 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 3.589 ms | 10 - 19 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 2.179 ms | 10 - 12 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_CONTEXT_BINARY | 3.028 ms | 10 - 25 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 2.15 ms | 6 - 8 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 2.647 ms | 9 - 18 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 2.154 ms | 1 - 4 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 2.438 ms | 9 - 18 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 1.653 ms | 0 - 22 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 1.858 ms | 10 - 29 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 1.336 ms | 1 - 16 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 1.637 ms | 0 - 14 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 1.946 ms | 10 - 10 MB | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 2.005 ms | 83 - 83 MB | NPU | Use Export Script |
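As a rough illustration of the encoder/decoder latency split noted in the description, total transcription time can be approximated as one encoder pass plus one decoder pass per generated token. The sketch below uses the Samsung Galaxy S23 QNN_CONTEXT_BINARY figures from the table above and the 200-token maximum decode length, so it is an upper bound rather than a measured end-to-end number.
```python
# Rough latency estimate: one encoder pass + one decoder pass per generated token.
# Values taken from the Galaxy S23 (QNN_CONTEXT_BINARY) rows above; 200 tokens is
# the max decoded sequence length, so this is a worst-case figure.
encoder_ms = 20.132           # HfWhisperEncoder on Galaxy S23
decoder_ms_per_token = 2.154  # HfWhisperDecoder on Galaxy S23
max_tokens = 200

total_ms = encoder_ms + max_tokens * decoder_ms_per_token
print(f"Estimated worst-case latency: {total_ms:.0f} ms (~{total_ms / 1000:.2f} s)")
# -> roughly 451 ms for a 30-second audio clip at the maximum decode length.
```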
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[whisper-tiny]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.whisper_tiny.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above command).
```
%run -m qai_hub_models.models.whisper_tiny.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.whisper_tiny.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/whisper_tiny/qai_hub_models/models/Whisper-Tiny/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.whisper_tiny import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
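For instance, here is a minimal sketch of a relative-error spot check. It assumes `on_device_output` is the name-to-list-of-arrays dict returned by `download_output_data()` and that `reference_outputs` is a matching dict of PyTorch outputs you computed yourself on the same sample inputs; both structures are assumptions about your setup, not a fixed API contract.
```python
import numpy as np

def max_relative_error(reference: np.ndarray, candidate: np.ndarray, eps: float = 1e-6) -> float:
    """Largest element-wise |ref - cand| / (|ref| + eps)."""
    return float(np.max(np.abs(reference - candidate) / (np.abs(reference) + eps)))

# Compare each output tensor produced on-device against its PyTorch reference.
for name, device_tensors in on_device_output.items():
    ref = np.asarray(reference_outputs[name][0])
    dev = np.asarray(device_tensors[0])
    print(f"{name}: max relative error = {max_relative_error(ref, dev):.4e}")
```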
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Whisper-Tiny's performance across various devices [here](https://aihub.qualcomm.com/models/whisper_tiny).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of Whisper-Tiny can be found
[here](https://github.com/huggingface/transformers/blob/v4.42.3/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf)
* [Source Model Implementation](https://github.com/huggingface/transformers/tree/v4.42.3/src/transformers/models/whisper)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
Hapiness/blockassist
|
Hapiness
| 2025-09-16T07:01:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"downy vicious mammoth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T07:00:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- downy vicious mammoth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sunipun/rooster-art-lora
|
Sunipun
| 2025-09-16T06:58:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2025-09-16T06:56:53Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/33d114_3c3920b0cbd44516afdfad7531ebae99~mv2_d_8100_11700_s_4_2.jpg
text: A rooster driving a car
- output:
url: images/image (3).webp
text: A rooster driving a car
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: rooster_art
---
# rooster-art
<Gallery />
## Trigger words
You should use `rooster_art` to trigger the image generation.
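A minimal sketch of loading this LoRA on top of the SDXL base model with diffusers. It assumes the repository contains a single LoRA weight file that diffusers can auto-detect (otherwise pass `weight_name=` explicitly); dtype, device, and sampling settings are placeholders to adapt to your hardware.
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the rooster-art LoRA from this repository.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("Sunipun/rooster-art-lora")

# The trigger word `rooster_art` activates the trained style.
image = pipe("rooster_art, a rooster driving a car", num_inference_steps=30).images[0]
image.save("rooster.png")
```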
## Download model
[Download](/Sunipun/rooster-art-lora/tree/main) them in the Files & versions tab.
|
yujiangw/Qwen3-1.7B-GRPO
|
yujiangw
| 2025-09-16T06:56:25Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-01T22:05:46Z |
---
library_name: transformers
model_name: Qwen3-1.7B-GRPO
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen3-1.7B-GRPO
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yujiangw/Qwen3-1.7B-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yujiangw-carnegie-mellon-university/huggingface/runs/t2iq960s)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
PlayerBPlaytime/MJ-Models
|
PlayerBPlaytime
| 2025-09-16T06:55:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-01-18T21:52:38Z |
---
license: apache-2.0
---
|
csikasote/mms-1b-all-bemgen-combined-m100f50-42-DAT-2e-1
|
csikasote
| 2025-09-16T06:54:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-16T06:15:06Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m100f50-42-DAT-2e-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m100f50-42-DAT-2e-1
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3243
- Cer: 0.0901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
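As referenced above, here is a hedged sketch of how these hyperparameters map onto Hugging Face `TrainingArguments`; the `output_dir` and the surrounding trainer wiring are placeholders, not the original training script.
```python
from transformers import TrainingArguments

# Approximate reconstruction of the configuration listed above.
training_args = TrainingArguments(
    output_dir="mms-1b-all-bemgen-combined-m100f50-42-DAT-2e-1",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```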
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.9855 | 0.5618 | 100 | 3.0498 | 1.0 |
| 0.7152 | 1.1236 | 200 | 1.9710 | 0.6918 |
| 0.542 | 1.6854 | 300 | 0.4863 | 0.1206 |
| 0.6118 | 2.2472 | 400 | 0.3793 | 0.1011 |
| 0.6607 | 2.8090 | 500 | 0.3603 | 0.1010 |
| 0.6371 | 3.3708 | 600 | 0.3529 | 0.0966 |
| 0.641 | 3.9326 | 700 | 0.3425 | 0.0961 |
| 0.6864 | 4.4944 | 800 | 0.3243 | 0.0901 |
| 0.6732 | 5.0562 | 900 | 0.3308 | 0.0982 |
| 0.6498 | 5.6180 | 1000 | 0.3235 | 0.0891 |
| 0.6708 | 6.1798 | 1100 | 0.3160 | 0.0897 |
| 0.6404 | 6.7416 | 1200 | 0.3401 | 0.0988 |
| 0.6552 | 7.3034 | 1300 | 0.3309 | 0.0945 |
| 0.6504 | 7.8652 | 1400 | 0.3266 | 0.0946 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
mradermacher/Impish-Irix-Kitsune-GGUF
|
mradermacher
| 2025-09-16T06:52:55Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"RP",
"roleplay",
"NSFW",
"en",
"base_model:MrRikyz/Impish-Irix-Kitsune",
"base_model:quantized:MrRikyz/Impish-Irix-Kitsune",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-16T05:15:08Z |
---
base_model: MrRikyz/Impish-Irix-Kitsune
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
- RP
- roleplay
- NSFW
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/MrRikyz/Impish-Irix-Kitsune
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Impish-Irix-Kitsune-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Impish-Irix-Kitsune-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
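As a minimal sketch, one way to run a single-file quant from this repository with `llama-cpp-python`; the Q4_K_M file name is taken from the table below, while the context size, prompt, and sampling settings are placeholders to adapt to your setup.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the single-file quants listed below (Q4_K_M is a common default).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Impish-Irix-Kitsune-GGUF",
    filename="Impish-Irix-Kitsune.Q4_K_M.gguf",
)

# Load the model and run a short completion; tune n_ctx / GPU offload for your hardware.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a one-sentence greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```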
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Impish-Irix-Kitsune-GGUF/resolve/main/Impish-Irix-Kitsune.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
csikasote/mms-1b-all-bemgen-combined-m100f50-42-DAT-1e-1
|
csikasote
| 2025-09-16T06:45:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-16T06:15:06Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m100f50-42-DAT-1e-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m100f50-42-DAT-1e-1
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3757
- Cer: 0.1056
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.1872 | 0.5618 | 100 | 3.1092 | 1.0000 |
| 0.4587 | 1.1236 | 200 | 2.5617 | 0.9640 |
| 0.4541 | 1.6854 | 300 | 1.2329 | 0.3923 |
| 0.5993 | 2.2472 | 400 | 0.6113 | 0.1642 |
| 0.6091 | 2.8090 | 500 | 0.4449 | 0.1147 |
| 0.5945 | 3.3708 | 600 | 0.4128 | 0.1110 |
| 0.6171 | 3.9326 | 700 | 0.4027 | 0.1157 |
| 0.6225 | 4.4944 | 800 | 0.3757 | 0.1057 |
| 0.6166 | 5.0562 | 900 | 0.3911 | 0.1320 |
| 0.5993 | 5.6180 | 1000 | 0.4025 | 0.1280 |
| 0.653 | 6.1798 | 1100 | 0.4031 | 0.1317 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
qualcomm/VIT
|
qualcomm
| 2025-09-16T06:43:46Z | 232 | 16 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:2010.11929",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T23:09:22Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# VIT: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
VIT is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of VIT found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py).
This repository provides scripts to run VIT on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/vit).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 86.6M
- Model size (float): 330 MB
- Model size (w8a16): 86.2 MB
- Model size (w8a8): 83.2 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| VIT | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 42.876 ms | 0 - 306 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 45.163 ms | 0 - 320 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 17.073 ms | 0 - 299 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 21.48 ms | 1 - 324 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 12.48 ms | 0 - 23 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 13.839 ms | 0 - 31 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 13.477 ms | 0 - 23 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT.onnx.zip) |
| VIT | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 15.25 ms | 0 - 306 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 16.627 ms | 1 - 327 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 42.876 ms | 0 - 306 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 45.163 ms | 0 - 320 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 12.452 ms | 0 - 16 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 13.834 ms | 0 - 33 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 19.267 ms | 0 - 290 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 19.77 ms | 0 - 320 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 12.492 ms | 0 - 14 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 13.835 ms | 0 - 31 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 15.25 ms | 0 - 306 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 16.627 ms | 1 - 327 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 12.462 ms | 0 - 20 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 13.849 ms | 0 - 29 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 13.325 ms | 1 - 25 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT.onnx.zip) |
| VIT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 8.515 ms | 0 - 311 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 9.54 ms | 1 - 324 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 9.146 ms | 0 - 323 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT.onnx.zip) |
| VIT | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 7.282 ms | 0 - 309 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
| VIT | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 7.977 ms | 1 - 313 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 7.606 ms | 1 - 323 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT.onnx.zip) |
| VIT | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 14.618 ms | 1094 - 1094 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT.dlc) |
| VIT | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 13.843 ms | 171 - 171 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT.onnx.zip) |
| VIT | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 65.524 ms | 0 - 196 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 50.065 ms | 0 - 223 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 26.186 ms | 0 - 47 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 168.292 ms | 536 - 787 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 23.047 ms | 0 - 197 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 196.718 ms | 0 - 1520 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 536.174 ms | 70 - 87 MB | CPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a16 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 631.549 ms | 44 - 129 MB | CPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 65.524 ms | 0 - 196 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 26.173 ms | 0 - 48 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 36.991 ms | 0 - 215 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 26.05 ms | 0 - 48 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 23.047 ms | 0 - 197 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 26.057 ms | 0 - 48 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 159.216 ms | 653 - 900 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 19.859 ms | 0 - 201 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 130.896 ms | 663 - 829 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 16.724 ms | 0 - 188 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 106.079 ms | 692 - 841 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 25.769 ms | 325 - 325 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.dlc) |
| VIT | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 156.58 ms | 926 - 926 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a16.onnx.zip) |
| VIT | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 15.863 ms | 0 - 47 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 8.21 ms | 0 - 56 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 7.618 ms | 0 - 22 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 159.971 ms | 662 - 842 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 8.002 ms | 0 - 47 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 62.138 ms | 2 - 45 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 446.395 ms | 29 - 47 MB | CPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 467.202 ms | 29 - 83 MB | CPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 15.863 ms | 0 - 47 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 7.628 ms | 0 - 21 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 9.885 ms | 0 - 49 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 7.618 ms | 0 - 81 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 8.002 ms | 0 - 47 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 7.663 ms | 0 - 80 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 151.575 ms | 598 - 837 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 5.385 ms | 0 - 53 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 127.612 ms | 675 - 832 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 4.991 ms | 0 - 56 MB | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.tflite) |
| VIT | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 96.727 ms | 668 - 803 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 144.921 ms | 926 - 926 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8.onnx.zip) |
| VIT | w8a8_mixed_int16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 53.742 ms | 0 - 242 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 21.16 ms | 3 - 44 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 185.126 ms | 479 - 727 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.onnx.zip) |
| VIT | w8a8_mixed_int16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 20.135 ms | 0 - 229 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 459.794 ms | 47 - 65 MB | CPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.onnx.zip) |
| VIT | w8a8_mixed_int16 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 527.283 ms | 48 - 62 MB | CPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.onnx.zip) |
| VIT | w8a8_mixed_int16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 53.742 ms | 0 - 242 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 21.252 ms | 0 - 41 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 21.119 ms | 0 - 41 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 20.135 ms | 0 - 229 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 21.121 ms | 0 - 41 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 194.107 ms | 524 - 807 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.onnx.zip) |
| VIT | w8a8_mixed_int16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 16.395 ms | 3 - 252 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 172.624 ms | 541 - 721 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.onnx.zip) |
| VIT | w8a8_mixed_int16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 12.962 ms | 0 - 260 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 119.534 ms | 557 - 737 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.onnx.zip) |
| VIT | w8a8_mixed_int16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 20.932 ms | 398 - 398 MB | NPU | [VIT.dlc](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.dlc) |
| VIT | w8a8_mixed_int16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 170.989 ms | 922 - 922 MB | NPU | [VIT.onnx.zip](https://huggingface.co/qualcomm/VIT/blob/main/VIT_w8a8_mixed_int16.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.vit.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.vit.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.vit.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/vit/qai_hub_models/models/VIT/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.vit import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
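For example, a minimal sketch of such a comparison, assuming you have already extracted the PyTorch reference output and the corresponding on-device output as NumPy arrays of the same shape (these helpers are illustrative, not part of `qai_hub_models`):
```python
import numpy as np
def psnr(reference: np.ndarray, candidate: np.ndarray) -> float:
    # Peak signal-to-noise ratio between the PyTorch reference and the on-device result.
    reference = reference.astype(np.float64)
    candidate = candidate.astype(np.float64)
    mse = np.mean((reference - candidate) ** 2)
    if mse == 0.0:
        return float("inf")
    peak = np.abs(reference).max()  # use the reference's dynamic range as the peak value
    return 10.0 * np.log10((peak ** 2) / mse)
def max_relative_error(reference: np.ndarray, candidate: np.ndarray, eps: float = 1e-8) -> float:
    # Largest element-wise relative deviation between the two outputs.
    return float(np.max(np.abs(reference - candidate) / (np.abs(reference) + eps)))
```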
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.vit.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.vit.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on VIT's performance across various devices [here](https://aihub.qualcomm.com/models/vit).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of VIT can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
haznitrama/babybabellm-gpt_bert-min-causal
|
haznitrama
| 2025-09-16T06:43:26Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"gpt_bert",
"custom_code",
"region:us"
] | null | 2025-09-16T06:42:41Z |
# haznitrama/babybabellm-gpt_bert-min-causal
GPT-BERT style BabyBabyLLM monolingual model for language **min**.
This repository mirrors the layout of the multi-all reference models: it may contain both *main* and *EMA* variants.
**Default variant exposed to generic loaders:** `ema`
## Variants Available
ema, main
## Files
- model.safetensors (alias of default variant)
- model_ema.safetensors
- pytorch_model.bin (legacy PyTorch format)
## Configuration
```json
{
"attention_probs_dropout_prob": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 384,
"intermediate_size": 1280,
"max_position_embeddings": 512,
"position_bucket_size": 32,
"num_attention_heads": 6,
"num_hidden_layers": 12,
"vocab_size": 8192,
"layer_norm_eps": 1e-05,
"auto_map": {
"AutoConfig": "configuration_gpt_bert.GPTBertConfig",
"AutoModel": "modeling_gpt_bert.GPTBertForMaskedLM",
"AutoModelForCausalLM": "modeling_gpt_bert.GPTBertForMaskedLM",
"AutoModelForMaskedLM": "modeling_gpt_bert.GPTBertForMaskedLM"
},
"return_dict": true,
"output_hidden_states": false,
"torchscript": false,
"dtype": "float32",
"pruned_heads": {},
"tie_word_embeddings": true,
"chunk_size_feed_forward": 0,
"is_encoder_decoder": false,
"is_decoder": false,
"cross_attention_hidden_size": null,
"add_cross_attention": false,
"tie_encoder_decoder": false,
"architectures": [
"GPTBertForMaskedLM"
],
"finetuning_task": null,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"task_specific_params": null,
"problem_type": null,
"tokenizer_class": null,
"prefix": null,
"bos_token_id": null,
"pad_token_id": null,
"eos_token_id": null,
"sep_token_id": null,
"decoder_start_token_id": null,
"max_length": 20,
"min_length": 0,
"do_sample": false,
"early_stopping": false,
"num_beams": 1,
"num_beam_groups": 1,
"diversity_penalty": 0.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"typical_p": 1.0,
"repetition_penalty": 1.0,
"length_penalty": 1.0,
"no_repeat_ngram_size": 0,
"encoder_no_repeat_ngram_size": 0,
"bad_words_ids": null,
"num_return_sequences": 1,
"output_scores": false,
"return_dict_in_generate": false,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"remove_invalid_values": false,
"exponential_decay_length_penalty": null,
"suppress_tokens": null,
"begin_suppress_tokens": null,
"_name_or_path": "",
"transformers_version": "4.56.1",
"tf_legacy_loss": false,
"use_bfloat16": false,
"model_type": "gpt_bert",
"output_attentions": false
}
```
Tokenizer file: `tokenizer_min_vs8192.json`
## Quick Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_id = 'haznitrama/babybabellm-gpt_bert-min-causal'
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
out = model(**tok('Hello world', return_tensors='pt'))
```
Select a specific variant explicitly (when both are present):
```python
# Load the EMA weights explicitly when both variants are present
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from transformers import AutoConfig, AutoModelForMaskedLM
model_id = 'haznitrama/babybabellm-gpt_bert-min-causal'
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_config(config, trust_remote_code=True)
# Download and load the EMA safetensors file
# (use 'pytorch_model.bin' with torch.load for the legacy format)
weights_path = hf_hub_download(repo_id=model_id, filename='model_ema.safetensors')
state_dict = load_file(weights_path)
model.load_state_dict(state_dict, strict=False)
```
### Causal LM Wrapper
This repo includes a lightweight GPTBertForCausalLM wrapper.
Generation example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
mid='haznitrama/babybabellm-gpt_bert-min-causal'
tok=AutoTokenizer.from_pretrained(mid)
model=AutoModelForCausalLM.from_pretrained(mid, trust_remote_code=True)
print(tok.decode(model.generate(**tok('Hello', return_tensors='pt'), max_new_tokens=20)[0], skip_special_tokens=True))
```
## Notes
- Converted on 2025-09-16T06:42:41.543885Z
- Safe serialization (safetensors) used; `pytorch_model.bin` added for legacy tools.
- Requires `trust_remote_code=True` due to custom architecture.
- EMA (Exponential Moving Average) weights can yield slightly better evaluation metrics; choose according to your needs. A minimal sketch of the usual EMA update rule is shown below.
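For context, EMA weights are typically maintained during training with an update of the following form; the decay value here is an illustrative assumption, not a detail taken from this model's training run:
```python
import torch
def update_ema(ema_model: torch.nn.Module, model: torch.nn.Module, decay: float = 0.999) -> None:
    # Typical EMA update applied after each optimizer step; 0.999 is an assumed decay.
    with torch.no_grad():
        for ema_param, param in zip(ema_model.parameters(), model.parameters()):
            ema_param.mul_(decay).add_(param, alpha=1.0 - decay)
```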
|
JustATalentedGuy/Medico_2025_1_1
|
JustATalentedGuy
| 2025-09-16T06:43:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-16T06:43:21Z |
---
license: apache-2.0
---
|
ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_122
|
ChenWu98
| 2025-09-16T06:42:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048",
"base_model:finetune:ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T06:42:31Z |
---
base_model: ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048
library_name: transformers
model_name: numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_122
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_122
This model is a fine-tuned version of [ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048](https://huggingface.co/ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_condition_2048).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_122", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/jp4qt4yy)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qualcomm/Video-MAE
|
qualcomm
| 2025-09-16T06:42:32Z | 70 | 1 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"video-classification",
"arxiv:2203.12602",
"license:other",
"region:us"
] |
video-classification
| 2025-03-14T02:13:17Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: video-classification
---

# Video-MAE: Optimized for Mobile Deployment
## Sports and human action recognition in videos
Video-MAE (Masked Autoencoder) is a video classification network built on a ViT (Vision Transformer) backbone.
This model is an implementation of Video-MAE found [here](https://github.com/MCG-NJU/VideoMAE).
This repository provides scripts to run Video-MAE on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/video_mae).
### Model Details
- **Model Type:** Model_use_case.video_classification
- **Model Stats:**
- Model checkpoint: Kinetics-400
- Input resolution: 224x224
- Number of parameters: 87.7M
- Model size (float): 335 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Video-MAE | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 442.754 ms | 0 - 522 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1205.047 ms | 3 - 552 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 219.79 ms | 3 - 486 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1403.109 ms | 9 - 427 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 149.154 ms | 0 - 37 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 469.676 ms | 9 - 45 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 585.788 ms | 0 - 249 MB | NPU | [Video-MAE.onnx.zip](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.onnx.zip) |
| Video-MAE | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 171.607 ms | 0 - 522 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 512.953 ms | 2 - 455 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 442.754 ms | 0 - 522 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1205.047 ms | 3 - 552 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 149.07 ms | 0 - 36 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 469.042 ms | 9 - 48 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 239.774 ms | 0 - 480 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 740.755 ms | 6 - 423 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 148.626 ms | 0 - 36 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 469.418 ms | 9 - 49 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 171.607 ms | 0 - 522 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 512.953 ms | 2 - 455 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 148.891 ms | 0 - 44 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 469.928 ms | 9 - 47 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 586.169 ms | 9 - 48 MB | NPU | [Video-MAE.onnx.zip](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.onnx.zip) |
| Video-MAE | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 109.661 ms | 46 - 568 MB | NPU | [Video-MAE.tflite](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.tflite) |
| Video-MAE | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 372.772 ms | 9 - 565 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 386.596 ms | 9 - 489 MB | NPU | [Video-MAE.onnx.zip](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.onnx.zip) |
| Video-MAE | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 337.054 ms | 1 - 489 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 574.665 ms | 9 - 530 MB | NPU | [Video-MAE.onnx.zip](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.onnx.zip) |
| Video-MAE | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 483.963 ms | 636 - 636 MB | NPU | [Video-MAE.dlc](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.dlc) |
| Video-MAE | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 589.06 ms | 188 - 188 MB | NPU | [Video-MAE.onnx.zip](https://huggingface.co/qualcomm/Video-MAE/blob/main/Video-MAE.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[video-mae]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.video_mae.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.video_mae.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.video_mae.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/video_mae/qai_hub_models/models/Video-MAE/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.video_mae import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
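For a classifier like this one, a simple hedged check is to compare the top-5 classes predicted from the PyTorch logits and the on-device logits, assuming both have already been converted to NumPy arrays (this helper is illustrative, not part of `qai_hub_models`):
```python
import numpy as np
def top_k_overlap(reference_logits: np.ndarray, device_logits: np.ndarray, k: int = 5) -> float:
    # Fraction of the reference model's top-k classes that also appear in the
    # on-device model's top-k classes (1.0 means the sets agree exactly).
    ref_topk = set(np.argsort(reference_logits.ravel())[-k:].tolist())
    dev_topk = set(np.argsort(device_logits.ravel())[-k:].tolist())
    return len(ref_topk & dev_topk) / k
```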
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Video-MAE's performance across various devices [here](https://aihub.qualcomm.com/models/video_mae).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Video-MAE can be found
[here](https://github.com/MCG-NJU/VideoMAE/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602)
* [Source Model Implementation](https://github.com/MCG-NJU/VideoMAE)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/Unet-Segmentation
|
qualcomm
| 2025-09-16T06:41:58Z | 231 | 7 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"real_time",
"android",
"image-segmentation",
"arxiv:1505.04597",
"license:other",
"region:us"
] |
image-segmentation
| 2024-02-25T23:01:41Z |
---
library_name: pytorch
license: other
tags:
- backbone
- real_time
- android
pipeline_tag: image-segmentation
---

# Unet-Segmentation: Optimized for Mobile Deployment
## Real-time segmentation optimized for mobile and edge
UNet is a machine learning model that produces a segmentation mask for an image. The most basic use case will label each pixel in the image as being in the foreground or the background. More advanced usage will assign a class label to each pixel. This version of the model was trained on the data from Kaggle's Carvana Image Masking Challenge (see https://www.kaggle.com/c/carvana-image-masking-challenge) and is used for vehicle segmentation.
This model is an implementation of Unet-Segmentation found [here](https://github.com/milesial/Pytorch-UNet).
This repository provides scripts to run Unet-Segmentation on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/unet_segmentation).
### Model Details
- **Model Type:** Model_use_case.semantic_segmentation
- **Model Stats:**
- Model checkpoint: unet_carvana_scale1.0_epoch2
- Input resolution: 224x224
- Number of output classes: 2 (foreground / background)
- Number of parameters: 31.0M
- Model size (float): 118 MB
- Model size (w8a8): 29.8 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Unet-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 957.723 ms | 0 - 115 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 943.86 ms | 3 - 131 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 292.328 ms | 6 - 140 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 295.194 ms | 9 - 159 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 160.967 ms | 6 - 465 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 137.595 ms | 10 - 57 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 154.299 ms | 0 - 92 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx.zip) |
| Unet-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 248.534 ms | 6 - 121 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 238.794 ms | 1 - 129 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 957.723 ms | 0 - 115 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 943.86 ms | 3 - 131 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 152.468 ms | 6 - 465 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 139.377 ms | 10 - 54 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 274.528 ms | 6 - 121 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 259.093 ms | 1 - 128 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 156.443 ms | 6 - 463 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 139.792 ms | 9 - 53 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 248.534 ms | 6 - 121 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 238.794 ms | 1 - 129 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 153.363 ms | 6 - 238 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 137.703 ms | 9 - 53 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 153.47 ms | 15 - 51 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx.zip) |
| Unet-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 111.582 ms | 6 - 117 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 103.897 ms | 9 - 133 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 110.262 ms | 24 - 135 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx.zip) |
| Unet-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 102.945 ms | 6 - 122 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.tflite) |
| Unet-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 95.65 ms | 9 - 141 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 102.406 ms | 25 - 142 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx.zip) |
| Unet-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 132.217 ms | 74 - 74 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.dlc) |
| Unet-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 141.007 ms | 54 - 54 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation.onnx.zip) |
| Unet-Segmentation | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 126.729 ms | 2 - 46 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1431.388 ms | 2 - 60 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 53.926 ms | 2 - 84 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 63.279 ms | 2 - 99 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 37.74 ms | 0 - 902 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 599.499 ms | 1 - 27 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 423.569 ms | 259 - 756 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.onnx.zip) |
| Unet-Segmentation | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 34.745 ms | 2 - 45 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 699.109 ms | 2 - 59 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 312.114 ms | 2 - 291 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 336.184 ms | 2 - 364 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 10707.488 ms | 1438 - 1450 MB | CPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.onnx.zip) |
| Unet-Segmentation | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 3161.088 ms | 0 - 846 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 12381.268 ms | 1377 - 1441 MB | CPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.onnx.zip) |
| Unet-Segmentation | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 126.729 ms | 2 - 46 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1431.388 ms | 2 - 60 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 36.017 ms | 0 - 899 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 599.712 ms | 1 - 25 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 67.433 ms | 2 - 47 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 64.285 ms | 2 - 64 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 38.312 ms | 0 - 900 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 599.83 ms | 2 - 19 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 34.745 ms | 2 - 45 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 699.109 ms | 2 - 59 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 35.529 ms | 0 - 899 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 600.26 ms | 0 - 26 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 428.766 ms | 261 - 777 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.onnx.zip) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 29.024 ms | 1 - 82 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 528.218 ms | 2 - 97 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 347.337 ms | 847 - 3929 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.onnx.zip) |
| Unet-Segmentation | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 25.802 ms | 1 - 51 MB | NPU | [Unet-Segmentation.tflite](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.tflite) |
| Unet-Segmentation | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 559.333 ms | 2 - 69 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 318.382 ms | 325 - 2365 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.onnx.zip) |
| Unet-Segmentation | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 671.598 ms | 60 - 60 MB | NPU | [Unet-Segmentation.dlc](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.dlc) |
| Unet-Segmentation | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 404.127 ms | 463 - 463 MB | NPU | [Unet-Segmentation.onnx.zip](https://huggingface.co/qualcomm/Unet-Segmentation/blob/main/Unet-Segmentation_w8a8.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.unet_segmentation.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.unet_segmentation.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.unet_segmentation.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/unet_segmentation/qai_hub_models/models/Unet-Segmentation/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.unet_segmentation import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
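For a segmentation model like this one, a simple hedged check is per-pixel label agreement between the PyTorch reference and the on-device output, assuming both are NumPy arrays of logits shaped `[batch, classes, height, width]` (this helper is illustrative, not part of `qai_hub_models`):
```python
import numpy as np
def mask_agreement(reference_logits: np.ndarray, device_logits: np.ndarray) -> float:
    # Fraction of pixels assigned the same class label by both outputs.
    reference_mask = reference_logits.argmax(axis=1)
    device_mask = device_logits.argmax(axis=1)
    return float((reference_mask == device_mask).mean())
```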
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.unet_segmentation.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.unet_segmentation.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Unet-Segmentation's performance across various devices [here](https://aihub.qualcomm.com/models/unet_segmentation).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Unet-Segmentation can be found
[here](https://github.com/milesial/Pytorch-UNet/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/milesial/Pytorch-UNet/blob/master/LICENSE)
## References
* [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
* [Source Model Implementation](https://github.com/milesial/Pytorch-UNet)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/Swin-Tiny
|
qualcomm
| 2025-09-16T06:40:56Z | 112 | 1 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:2103.14030",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T22:56:55Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# Swin-Tiny: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
SwinTiny is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of Swin-Tiny found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py).
This repository provides scripts to run Swin-Tiny on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/swin_tiny).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 28.8M
- Model size (float): 110 MB
- Model size (w8a16): 29.9 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Swin-Tiny | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 24.223 ms | 0 - 161 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 21.286 ms | 1 - 151 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 13.71 ms | 0 - 162 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 14.741 ms | 1 - 153 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 10.679 ms | 0 - 21 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 9.272 ms | 0 - 38 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 9.06 ms | 0 - 163 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.onnx.zip) |
| Swin-Tiny | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 12.01 ms | 0 - 161 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 10.568 ms | 1 - 385 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 24.223 ms | 0 - 161 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 21.286 ms | 1 - 151 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 10.831 ms | 0 - 17 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 9.301 ms | 0 - 38 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 15.657 ms | 0 - 156 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 14.044 ms | 0 - 373 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 10.813 ms | 0 - 18 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 9.319 ms | 0 - 40 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 12.01 ms | 0 - 161 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 10.568 ms | 1 - 385 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 10.879 ms | 0 - 17 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 9.328 ms | 0 - 40 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 9.182 ms | 0 - 165 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.onnx.zip) |
| Swin-Tiny | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 7.133 ms | 0 - 166 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 6.102 ms | 1 - 160 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 6.113 ms | 1 - 157 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.onnx.zip) |
| Swin-Tiny | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 6.786 ms | 0 - 158 MB | NPU | [Swin-Tiny.tflite](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.tflite) |
| Swin-Tiny | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 5.5 ms | 1 - 371 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 5.662 ms | 0 - 149 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.onnx.zip) |
| Swin-Tiny | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 9.985 ms | 301 - 301 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.dlc) |
| Swin-Tiny | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 9.547 ms | 57 - 57 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny.onnx.zip) |
| Swin-Tiny | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 16.828 ms | 0 - 161 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 11.156 ms | 0 - 127 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 9.409 ms | 0 - 51 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 58.485 ms | 133 - 234 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
| Swin-Tiny | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 10.014 ms | 0 - 165 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 30.265 ms | 0 - 482 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 193.605 ms | 88 - 110 MB | CPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
| Swin-Tiny | w8a16 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 206.645 ms | 76 - 104 MB | CPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
| Swin-Tiny | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 16.828 ms | 0 - 161 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 9.411 ms | 0 - 51 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 11.19 ms | 0 - 122 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 9.419 ms | 0 - 52 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 10.014 ms | 0 - 165 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 9.446 ms | 0 - 54 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 65.949 ms | 147 - 245 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
| Swin-Tiny | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 6.215 ms | 0 - 175 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 43.361 ms | 160 - 316 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
| Swin-Tiny | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 5.626 ms | 0 - 161 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 35.619 ms | 156 - 290 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
| Swin-Tiny | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 10.213 ms | 90 - 90 MB | NPU | [Swin-Tiny.dlc](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.dlc) |
| Swin-Tiny | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 52.964 ms | 232 - 232 MB | NPU | [Swin-Tiny.onnx.zip](https://huggingface.co/qualcomm/Swin-Tiny/blob/main/Swin-Tiny_w8a16.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.swin_tiny.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.swin_tiny.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on-device on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Runs an accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.swin_tiny.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/swin_tiny/qai_hub_models/models/Swin-Tiny/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.swin_tiny import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
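For instance, a minimal sketch of such a comparison (assuming you have extracted matching NumPy arrays from the PyTorch model and from `on_device_output`; the exact key layout of the downloaded data depends on the model):
```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two same-shaped arrays."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    peak = float(np.max(np.abs(reference)))
    return 10.0 * np.log10(peak ** 2 / mse)

def relative_error(reference: np.ndarray, test: np.ndarray) -> float:
    """L2 norm of the difference, relative to the L2 norm of the reference."""
    return float(np.linalg.norm(reference - test) / (np.linalg.norm(reference) + 1e-12))

# Hypothetical usage: `torch_out` from the PyTorch model, `device_out` extracted
# from `on_device_output` (both flattened to matching NumPy arrays).
# print(psnr(torch_out, device_out), relative_error(torch_out, device_out))
```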
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.swin_tiny.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.swin_tiny.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
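Before integrating the model into an Android app, you may want to sanity-check the exported asset locally. Below is a minimal sketch, assuming TensorFlow is installed and the TFLite file has been downloaded as `Swin-Tiny.tflite`; the input shape and dtype are read from the model rather than assumed:
```python
import numpy as np
import tensorflow as tf

# Load the downloaded TFLite asset and allocate buffers.
interpreter = tf.lite.Interpreter(model_path="Swin-Tiny.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a random tensor with the shape/dtype the model reports.
dummy = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

print("output shape:", interpreter.get_tensor(out["index"]).shape)
```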
## View on Qualcomm® AI Hub
Get more details on Swin-Tiny's performance across various devices [here](https://aihub.qualcomm.com/models/swin_tiny).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Swin-Tiny can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
LarryAIDraw/TrueKafka
|
LarryAIDraw
| 2025-09-16T06:40:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-16T06:20:57Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/528096/ponyv6-xl-kafka-from-honkai-star-rail-lora
|
Jessie09/ppo_Qwen3-14B_rsa
|
Jessie09
| 2025-09-16T06:40:39Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2025-09-16T06:33:22Z |
# Model Card for Model ppo_Qwen3-14B_rsa
## Model Details
### Model Description
* Developed by: Foresight-based Optimization Authors
* Backbone model: im_Qwen3-14B_rsa
* Training method: SFT with KL divergence
* Training data: Qwen3-14B_train_selfplay_data.json
* Training task: RSA
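The exact objective is not spelled out in this card, but as a rough conceptual sketch (not the authors' implementation), a clipped PPO-style policy loss with a KL penalty toward a frozen reference policy, consistent with the `clip_range` (0.2) and `lm_kl_coeff` (0.1) values listed below, could look like this:
```python
import torch

def ppo_kl_loss(logp_new, logp_old, logp_ref, advantages,
                clip_range=0.2, kl_coeff=0.1):
    """Conceptual sketch: clipped PPO surrogate plus a KL penalty to a reference policy."""
    ratio = torch.exp(logp_new - logp_old)                         # importance ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_range, 1 + clip_range) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()            # PPO clipped surrogate
    kl_penalty = (logp_new - logp_ref).mean()                      # rough KL(new || ref) estimate
    return policy_loss + kl_coeff * kl_penalty
```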
### Training Parameters
```
{
"output_dir": "/home/jiashuo/codes/ForesightOptim/checkpoints/ppo_Qwen3-14B_rsa",
"overwrite_output_dir": false,
"do_train": false,
"do_eval": false,
"do_predict": false,
"eval_strategy": {
"_value_": "no",
"_name_": "NO",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x73526f9c2320>, '__module__': 'transformers.trainer_utils', '__doc__': 'An enumeration.', '_member_names_': ['NO', 'STEPS', 'EPOCH'], '_member_map_': {'NO': <IntervalStrategy.NO: 'no'>, 'STEPS': <IntervalStrategy.STEPS: 'steps'>, 'EPOCH': <IntervalStrategy.EPOCH: 'epoch'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'no': <IntervalStrategy.NO: 'no'>, 'steps': <IntervalStrategy.STEPS: 'steps'>, 'epoch': <IntervalStrategy.EPOCH: 'epoch'>}, 'NO': <IntervalStrategy.NO: 'no'>, 'STEPS': <IntervalStrategy.STEPS: 'steps'>, 'EPOCH': <IntervalStrategy.EPOCH: 'epoch'>, '__new__': <function Enum.__new__ at 0x73526f9c0940>}"
},
"prediction_loss_only": false,
"per_device_train_batch_size": 2,
"per_device_eval_batch_size": 8,
"per_gpu_train_batch_size": null,
"per_gpu_eval_batch_size": null,
"gradient_accumulation_steps": 8,
"eval_accumulation_steps": null,
"eval_delay": 0,
"torch_empty_cache_steps": null,
"learning_rate": 1e-05,
"weight_decay": 0.0,
"adam_beta1": 0.9,
"adam_beta2": 0.999,
"adam_epsilon": 1e-08,
"max_grad_norm": 1.0,
"num_train_epochs": 1.0,
"max_steps": -1,
"lr_scheduler_type": {
"_value_": "cosine",
"_name_": "COSINE",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x73526f9c2320>, '__module__': 'transformers.trainer_utils', '__doc__': '\\n Scheduler names for the parameter `lr_scheduler_type` in [`TrainingArguments`].\\n By default, it uses \"linear\". Internally, this retrieves `get_linear_schedule_with_warmup` scheduler from [`Trainer`].\\n Scheduler types:\\n - \"linear\" = get_linear_schedule_with_warmup\\n - \"cosine\" = get_cosine_schedule_with_warmup\\n - \"cosine_with_restarts\" = get_cosine_with_hard_restarts_schedule_with_warmup\\n - \"polynomial\" = get_polynomial_decay_schedule_with_warmup\\n - \"constant\" = get_constant_schedule\\n - \"constant_with_warmup\" = get_constant_schedule_with_warmup\\n - \"inverse_sqrt\" = get_inverse_sqrt_schedule\\n - \"reduce_lr_on_plateau\" = get_reduce_on_plateau_schedule\\n - \"cosine_with_min_lr\" = get_cosine_with_min_lr_schedule_with_warmup\\n - \"warmup_stable_decay\" = get_wsd_schedule\\n ', '_member_names_': ['LINEAR', 'COSINE', 'COSINE_WITH_RESTARTS', 'POLYNOMIAL', 'CONSTANT', 'CONSTANT_WITH_WARMUP', 'INVERSE_SQRT', 'REDUCE_ON_PLATEAU', 'COSINE_WITH_MIN_LR', 'WARMUP_STABLE_DECAY'], '_member_map_': {'LINEAR': <SchedulerType.LINEAR: 'linear'>, 'COSINE': <SchedulerType.COSINE: 'cosine'>, 'COSINE_WITH_RESTARTS': <SchedulerType.COSINE_WITH_RESTARTS: 'cosine_with_restarts'>, 'POLYNOMIAL': <SchedulerType.POLYNOMIAL: 'polynomial'>, 'CONSTANT': <SchedulerType.CONSTANT: 'constant'>, 'CONSTANT_WITH_WARMUP': <SchedulerType.CONSTANT_WITH_WARMUP: 'constant_with_warmup'>, 'INVERSE_SQRT': <SchedulerType.INVERSE_SQRT: 'inverse_sqrt'>, 'REDUCE_ON_PLATEAU': <SchedulerType.REDUCE_ON_PLATEAU: 'reduce_lr_on_plateau'>, 'COSINE_WITH_MIN_LR': <SchedulerType.COSINE_WITH_MIN_LR: 'cosine_with_min_lr'>, 'WARMUP_STABLE_DECAY': <SchedulerType.WARMUP_STABLE_DECAY: 'warmup_stable_decay'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'linear': <SchedulerType.LINEAR: 'linear'>, 'cosine': <SchedulerType.COSINE: 'cosine'>, 'cosine_with_restarts': <SchedulerType.COSINE_WITH_RESTARTS: 'cosine_with_restarts'>, 'polynomial': <SchedulerType.POLYNOMIAL: 'polynomial'>, 'constant': <SchedulerType.CONSTANT: 'constant'>, 'constant_with_warmup': <SchedulerType.CONSTANT_WITH_WARMUP: 'constant_with_warmup'>, 'inverse_sqrt': <SchedulerType.INVERSE_SQRT: 'inverse_sqrt'>, 'reduce_lr_on_plateau': <SchedulerType.REDUCE_ON_PLATEAU: 'reduce_lr_on_plateau'>, 'cosine_with_min_lr': <SchedulerType.COSINE_WITH_MIN_LR: 'cosine_with_min_lr'>, 'warmup_stable_decay': <SchedulerType.WARMUP_STABLE_DECAY: 'warmup_stable_decay'>}, 'LINEAR': <SchedulerType.LINEAR: 'linear'>, 'COSINE': <SchedulerType.COSINE: 'cosine'>, 'COSINE_WITH_RESTARTS': <SchedulerType.COSINE_WITH_RESTARTS: 'cosine_with_restarts'>, 'POLYNOMIAL': <SchedulerType.POLYNOMIAL: 'polynomial'>, 'CONSTANT': <SchedulerType.CONSTANT: 'constant'>, 'CONSTANT_WITH_WARMUP': <SchedulerType.CONSTANT_WITH_WARMUP: 'constant_with_warmup'>, 'INVERSE_SQRT': <SchedulerType.INVERSE_SQRT: 'inverse_sqrt'>, 'REDUCE_ON_PLATEAU': <SchedulerType.REDUCE_ON_PLATEAU: 'reduce_lr_on_plateau'>, 'COSINE_WITH_MIN_LR': <SchedulerType.COSINE_WITH_MIN_LR: 'cosine_with_min_lr'>, 'WARMUP_STABLE_DECAY': <SchedulerType.WARMUP_STABLE_DECAY: 'warmup_stable_decay'>, '__new__': <function Enum.__new__ at 0x73526f9c0940>}"
},
"lr_scheduler_kwargs": {},
"warmup_ratio": 0.03,
"warmup_steps": 0,
"log_level": "passive",
"log_level_replica": "warning",
"log_on_each_node": true,
"logging_dir": "/home/jiashuo/codes/ForesightOptim/checkpoints/ppo_Qwen3-14B_rsa/runs/Sep06_23-35-38_super-Rack-Server",
"logging_strategy": {
"_value_": "steps",
"_name_": "STEPS",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x73526f9c2320>, '__module__': 'transformers.trainer_utils', '__doc__': 'An enumeration.', '_member_names_': ['NO', 'STEPS', 'EPOCH'], '_member_map_': {'NO': <IntervalStrategy.NO: 'no'>, 'STEPS': <IntervalStrategy.STEPS: 'steps'>, 'EPOCH': <IntervalStrategy.EPOCH: 'epoch'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'no': <IntervalStrategy.NO: 'no'>, 'steps': <IntervalStrategy.STEPS: 'steps'>, 'epoch': <IntervalStrategy.EPOCH: 'epoch'>}, 'NO': <IntervalStrategy.NO: 'no'>, 'STEPS': <IntervalStrategy.STEPS: 'steps'>, 'EPOCH': <IntervalStrategy.EPOCH: 'epoch'>, '__new__': <function Enum.__new__ at 0x73526f9c0940>}"
},
"logging_first_step": false,
"logging_steps": 1.0,
"logging_nan_inf_filter": true,
"save_strategy": {
"_value_": "steps",
"_name_": "STEPS",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x73526f9c2320>, '__module__': 'transformers.trainer_utils', '__doc__': 'An enumeration.', '_member_names_': ['NO', 'STEPS', 'EPOCH', 'BEST'], '_member_map_': {'NO': <SaveStrategy.NO: 'no'>, 'STEPS': <SaveStrategy.STEPS: 'steps'>, 'EPOCH': <SaveStrategy.EPOCH: 'epoch'>, 'BEST': <SaveStrategy.BEST: 'best'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'no': <SaveStrategy.NO: 'no'>, 'steps': <SaveStrategy.STEPS: 'steps'>, 'epoch': <SaveStrategy.EPOCH: 'epoch'>, 'best': <SaveStrategy.BEST: 'best'>}, 'NO': <SaveStrategy.NO: 'no'>, 'STEPS': <SaveStrategy.STEPS: 'steps'>, 'EPOCH': <SaveStrategy.EPOCH: 'epoch'>, 'BEST': <SaveStrategy.BEST: 'best'>, '__new__': <function Enum.__new__ at 0x73526f9c0940>}"
},
"save_steps": 200,
"save_total_limit": null,
"save_safetensors": true,
"save_on_each_node": false,
"save_only_model": false,
"restore_callback_states_from_checkpoint": false,
"no_cuda": false,
"use_cpu": false,
"use_mps_device": false,
"seed": 42,
"data_seed": null,
"jit_mode_eval": false,
"use_ipex": false,
"bf16": true,
"fp16": false,
"fp16_opt_level": "O1",
"half_precision_backend": "auto",
"bf16_full_eval": false,
"fp16_full_eval": false,
"tf32": true,
"local_rank": 4,
"ddp_backend": null,
"tpu_num_cores": null,
"tpu_metrics_debug": false,
"debug": [],
"dataloader_drop_last": false,
"eval_steps": null,
"dataloader_num_workers": 0,
"dataloader_prefetch_factor": null,
"past_index": -1,
"run_name": "/home/jiashuo/codes/ForesightOptim/checkpoints/ppo_Qwen3-14B_rsa",
"disable_tqdm": false,
"remove_unused_columns": false,
"label_names": null,
"load_best_model_at_end": false,
"metric_for_best_model": null,
"greater_is_better": null,
"ignore_data_skip": false,
"fsdp": [],
"fsdp_min_num_params": 0,
"fsdp_config": {
"min_num_params": 0,
"xla": false,
"xla_fsdp_v2": false,
"xla_fsdp_grad_ckpt": false
},
"fsdp_transformer_layer_cls_to_wrap": null,
"accelerator_config": {
"split_batches": false,
"dispatch_batches": null,
"even_batches": true,
"use_seedable_sampler": true,
"non_blocking": false,
"gradient_accumulation_kwargs": null
},
"deepspeed": null,
"label_smoothing_factor": 0.0,
"optim": {
"_value_": "adamw_torch",
"_name_": "ADAMW_TORCH",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x73526f9c2320>, '__module__': 'transformers.training_args', '__doc__': '\\n Stores the acceptable string identifiers for optimizers.\\n ', '_member_names_': ['ADAMW_TORCH', 'ADAMW_TORCH_FUSED', 'ADAMW_TORCH_XLA', 'ADAMW_TORCH_NPU_FUSED', 'ADAMW_APEX_FUSED', 'ADAFACTOR', 'ADAMW_ANYPRECISION', 'ADAMW_TORCH_4BIT', 'ADAMW_TORCH_8BIT', 'ADEMAMIX', 'SGD', 'ADAGRAD', 'ADAMW_BNB', 'ADAMW_8BIT', 'ADEMAMIX_8BIT', 'LION_8BIT', 'LION', 'PAGED_ADAMW', 'PAGED_ADAMW_8BIT', 'PAGED_ADEMAMIX', 'PAGED_ADEMAMIX_8BIT', 'PAGED_LION', 'PAGED_LION_8BIT', 'RMSPROP', 'RMSPROP_BNB', 'RMSPROP_8BIT', 'RMSPROP_32BIT', 'GALORE_ADAMW', 'GALORE_ADAMW_8BIT', 'GALORE_ADAFACTOR', 'GALORE_ADAMW_LAYERWISE', 'GALORE_ADAMW_8BIT_LAYERWISE', 'GALORE_ADAFACTOR_LAYERWISE', 'LOMO', 'ADALOMO', 'GROKADAMW', 'SCHEDULE_FREE_RADAM', 'SCHEDULE_FREE_ADAMW', 'SCHEDULE_FREE_SGD', 'APOLLO_ADAMW', 'APOLLO_ADAMW_LAYERWISE'], '_member_map_': {'ADAMW_TORCH': <OptimizerNames.ADAMW_TORCH: 'adamw_torch'>, 'ADAMW_TORCH_FUSED': <OptimizerNames.ADAMW_TORCH_FUSED: 'adamw_torch_fused'>, 'ADAMW_TORCH_XLA': <OptimizerNames.ADAMW_TORCH_XLA: 'adamw_torch_xla'>, 'ADAMW_TORCH_NPU_FUSED': <OptimizerNames.ADAMW_TORCH_NPU_FUSED: 'adamw_torch_npu_fused'>, 'ADAMW_APEX_FUSED': <OptimizerNames.ADAMW_APEX_FUSED: 'adamw_apex_fused'>, 'ADAFACTOR': <OptimizerNames.ADAFACTOR: 'adafactor'>, 'ADAMW_ANYPRECISION': <OptimizerNames.ADAMW_ANYPRECISION: 'adamw_anyprecision'>, 'ADAMW_TORCH_4BIT': <OptimizerNames.ADAMW_TORCH_4BIT: 'adamw_torch_4bit'>, 'ADAMW_TORCH_8BIT': <OptimizerNames.ADAMW_TORCH_8BIT: 'adamw_torch_8bit'>, 'ADEMAMIX': <OptimizerNames.ADEMAMIX: 'ademamix'>, 'SGD': <OptimizerNames.SGD: 'sgd'>, 'ADAGRAD': <OptimizerNames.ADAGRAD: 'adagrad'>, 'ADAMW_BNB': <OptimizerNames.ADAMW_BNB: 'adamw_bnb_8bit'>, 'ADAMW_8BIT': <OptimizerNames.ADAMW_8BIT: 'adamw_8bit'>, 'ADEMAMIX_8BIT': <OptimizerNames.ADEMAMIX_8BIT: 'ademamix_8bit'>, 'LION_8BIT': <OptimizerNames.LION_8BIT: 'lion_8bit'>, 'LION': <OptimizerNames.LION: 'lion_32bit'>, 'PAGED_ADAMW': <OptimizerNames.PAGED_ADAMW: 'paged_adamw_32bit'>, 'PAGED_ADAMW_8BIT': <OptimizerNames.PAGED_ADAMW_8BIT: 'paged_adamw_8bit'>, 'PAGED_ADEMAMIX': <OptimizerNames.PAGED_ADEMAMIX: 'paged_ademamix_32bit'>, 'PAGED_ADEMAMIX_8BIT': <OptimizerNames.PAGED_ADEMAMIX_8BIT: 'paged_ademamix_8bit'>, 'PAGED_LION': <OptimizerNames.PAGED_LION: 'paged_lion_32bit'>, 'PAGED_LION_8BIT': <OptimizerNames.PAGED_LION_8BIT: 'paged_lion_8bit'>, 'RMSPROP': <OptimizerNames.RMSPROP: 'rmsprop'>, 'RMSPROP_BNB': <OptimizerNames.RMSPROP_BNB: 'rmsprop_bnb'>, 'RMSPROP_8BIT': <OptimizerNames.RMSPROP_8BIT: 'rmsprop_bnb_8bit'>, 'RMSPROP_32BIT': <OptimizerNames.RMSPROP_32BIT: 'rmsprop_bnb_32bit'>, 'GALORE_ADAMW': <OptimizerNames.GALORE_ADAMW: 'galore_adamw'>, 'GALORE_ADAMW_8BIT': <OptimizerNames.GALORE_ADAMW_8BIT: 'galore_adamw_8bit'>, 'GALORE_ADAFACTOR': <OptimizerNames.GALORE_ADAFACTOR: 'galore_adafactor'>, 'GALORE_ADAMW_LAYERWISE': <OptimizerNames.GALORE_ADAMW_LAYERWISE: 'galore_adamw_layerwise'>, 'GALORE_ADAMW_8BIT_LAYERWISE': <OptimizerNames.GALORE_ADAMW_8BIT_LAYERWISE: 'galore_adamw_8bit_layerwise'>, 'GALORE_ADAFACTOR_LAYERWISE': <OptimizerNames.GALORE_ADAFACTOR_LAYERWISE: 'galore_adafactor_layerwise'>, 'LOMO': <OptimizerNames.LOMO: 'lomo'>, 'ADALOMO': <OptimizerNames.ADALOMO: 'adalomo'>, 'GROKADAMW': <OptimizerNames.GROKADAMW: 'grokadamw'>, 'SCHEDULE_FREE_RADAM': <OptimizerNames.SCHEDULE_FREE_RADAM: 'schedule_free_radam'>, 'SCHEDULE_FREE_ADAMW': <OptimizerNames.SCHEDULE_FREE_ADAMW: 
'schedule_free_adamw'>, 'SCHEDULE_FREE_SGD': <OptimizerNames.SCHEDULE_FREE_SGD: 'schedule_free_sgd'>, 'APOLLO_ADAMW': <OptimizerNames.APOLLO_ADAMW: 'apollo_adamw'>, 'APOLLO_ADAMW_LAYERWISE': <OptimizerNames.APOLLO_ADAMW_LAYERWISE: 'apollo_adamw_layerwise'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'adamw_torch': <OptimizerNames.ADAMW_TORCH: 'adamw_torch'>, 'adamw_torch_fused': <OptimizerNames.ADAMW_TORCH_FUSED: 'adamw_torch_fused'>, 'adamw_torch_xla': <OptimizerNames.ADAMW_TORCH_XLA: 'adamw_torch_xla'>, 'adamw_torch_npu_fused': <OptimizerNames.ADAMW_TORCH_NPU_FUSED: 'adamw_torch_npu_fused'>, 'adamw_apex_fused': <OptimizerNames.ADAMW_APEX_FUSED: 'adamw_apex_fused'>, 'adafactor': <OptimizerNames.ADAFACTOR: 'adafactor'>, 'adamw_anyprecision': <OptimizerNames.ADAMW_ANYPRECISION: 'adamw_anyprecision'>, 'adamw_torch_4bit': <OptimizerNames.ADAMW_TORCH_4BIT: 'adamw_torch_4bit'>, 'adamw_torch_8bit': <OptimizerNames.ADAMW_TORCH_8BIT: 'adamw_torch_8bit'>, 'ademamix': <OptimizerNames.ADEMAMIX: 'ademamix'>, 'sgd': <OptimizerNames.SGD: 'sgd'>, 'adagrad': <OptimizerNames.ADAGRAD: 'adagrad'>, 'adamw_bnb_8bit': <OptimizerNames.ADAMW_BNB: 'adamw_bnb_8bit'>, 'adamw_8bit': <OptimizerNames.ADAMW_8BIT: 'adamw_8bit'>, 'ademamix_8bit': <OptimizerNames.ADEMAMIX_8BIT: 'ademamix_8bit'>, 'lion_8bit': <OptimizerNames.LION_8BIT: 'lion_8bit'>, 'lion_32bit': <OptimizerNames.LION: 'lion_32bit'>, 'paged_adamw_32bit': <OptimizerNames.PAGED_ADAMW: 'paged_adamw_32bit'>, 'paged_adamw_8bit': <OptimizerNames.PAGED_ADAMW_8BIT: 'paged_adamw_8bit'>, 'paged_ademamix_32bit': <OptimizerNames.PAGED_ADEMAMIX: 'paged_ademamix_32bit'>, 'paged_ademamix_8bit': <OptimizerNames.PAGED_ADEMAMIX_8BIT: 'paged_ademamix_8bit'>, 'paged_lion_32bit': <OptimizerNames.PAGED_LION: 'paged_lion_32bit'>, 'paged_lion_8bit': <OptimizerNames.PAGED_LION_8BIT: 'paged_lion_8bit'>, 'rmsprop': <OptimizerNames.RMSPROP: 'rmsprop'>, 'rmsprop_bnb': <OptimizerNames.RMSPROP_BNB: 'rmsprop_bnb'>, 'rmsprop_bnb_8bit': <OptimizerNames.RMSPROP_8BIT: 'rmsprop_bnb_8bit'>, 'rmsprop_bnb_32bit': <OptimizerNames.RMSPROP_32BIT: 'rmsprop_bnb_32bit'>, 'galore_adamw': <OptimizerNames.GALORE_ADAMW: 'galore_adamw'>, 'galore_adamw_8bit': <OptimizerNames.GALORE_ADAMW_8BIT: 'galore_adamw_8bit'>, 'galore_adafactor': <OptimizerNames.GALORE_ADAFACTOR: 'galore_adafactor'>, 'galore_adamw_layerwise': <OptimizerNames.GALORE_ADAMW_LAYERWISE: 'galore_adamw_layerwise'>, 'galore_adamw_8bit_layerwise': <OptimizerNames.GALORE_ADAMW_8BIT_LAYERWISE: 'galore_adamw_8bit_layerwise'>, 'galore_adafactor_layerwise': <OptimizerNames.GALORE_ADAFACTOR_LAYERWISE: 'galore_adafactor_layerwise'>, 'lomo': <OptimizerNames.LOMO: 'lomo'>, 'adalomo': <OptimizerNames.ADALOMO: 'adalomo'>, 'grokadamw': <OptimizerNames.GROKADAMW: 'grokadamw'>, 'schedule_free_radam': <OptimizerNames.SCHEDULE_FREE_RADAM: 'schedule_free_radam'>, 'schedule_free_adamw': <OptimizerNames.SCHEDULE_FREE_ADAMW: 'schedule_free_adamw'>, 'schedule_free_sgd': <OptimizerNames.SCHEDULE_FREE_SGD: 'schedule_free_sgd'>, 'apollo_adamw': <OptimizerNames.APOLLO_ADAMW: 'apollo_adamw'>, 'apollo_adamw_layerwise': <OptimizerNames.APOLLO_ADAMW_LAYERWISE: 'apollo_adamw_layerwise'>}, 'ADAMW_TORCH': <OptimizerNames.ADAMW_TORCH: 'adamw_torch'>, 'ADAMW_TORCH_FUSED': <OptimizerNames.ADAMW_TORCH_FUSED: 'adamw_torch_fused'>, 'ADAMW_TORCH_XLA': <OptimizerNames.ADAMW_TORCH_XLA: 'adamw_torch_xla'>, 'ADAMW_TORCH_NPU_FUSED': <OptimizerNames.ADAMW_TORCH_NPU_FUSED: 'adamw_torch_npu_fused'>, 'ADAMW_APEX_FUSED': <OptimizerNames.ADAMW_APEX_FUSED: 'adamw_apex_fused'>, 
'ADAFACTOR': <OptimizerNames.ADAFACTOR: 'adafactor'>, 'ADAMW_ANYPRECISION': <OptimizerNames.ADAMW_ANYPRECISION: 'adamw_anyprecision'>, 'ADAMW_TORCH_4BIT': <OptimizerNames.ADAMW_TORCH_4BIT: 'adamw_torch_4bit'>, 'ADAMW_TORCH_8BIT': <OptimizerNames.ADAMW_TORCH_8BIT: 'adamw_torch_8bit'>, 'ADEMAMIX': <OptimizerNames.ADEMAMIX: 'ademamix'>, 'SGD': <OptimizerNames.SGD: 'sgd'>, 'ADAGRAD': <OptimizerNames.ADAGRAD: 'adagrad'>, 'ADAMW_BNB': <OptimizerNames.ADAMW_BNB: 'adamw_bnb_8bit'>, 'ADAMW_8BIT': <OptimizerNames.ADAMW_8BIT: 'adamw_8bit'>, 'ADEMAMIX_8BIT': <OptimizerNames.ADEMAMIX_8BIT: 'ademamix_8bit'>, 'LION_8BIT': <OptimizerNames.LION_8BIT: 'lion_8bit'>, 'LION': <OptimizerNames.LION: 'lion_32bit'>, 'PAGED_ADAMW': <OptimizerNames.PAGED_ADAMW: 'paged_adamw_32bit'>, 'PAGED_ADAMW_8BIT': <OptimizerNames.PAGED_ADAMW_8BIT: 'paged_adamw_8bit'>, 'PAGED_ADEMAMIX': <OptimizerNames.PAGED_ADEMAMIX: 'paged_ademamix_32bit'>, 'PAGED_ADEMAMIX_8BIT': <OptimizerNames.PAGED_ADEMAMIX_8BIT: 'paged_ademamix_8bit'>, 'PAGED_LION': <OptimizerNames.PAGED_LION: 'paged_lion_32bit'>, 'PAGED_LION_8BIT': <OptimizerNames.PAGED_LION_8BIT: 'paged_lion_8bit'>, 'RMSPROP': <OptimizerNames.RMSPROP: 'rmsprop'>, 'RMSPROP_BNB': <OptimizerNames.RMSPROP_BNB: 'rmsprop_bnb'>, 'RMSPROP_8BIT': <OptimizerNames.RMSPROP_8BIT: 'rmsprop_bnb_8bit'>, 'RMSPROP_32BIT': <OptimizerNames.RMSPROP_32BIT: 'rmsprop_bnb_32bit'>, 'GALORE_ADAMW': <OptimizerNames.GALORE_ADAMW: 'galore_adamw'>, 'GALORE_ADAMW_8BIT': <OptimizerNames.GALORE_ADAMW_8BIT: 'galore_adamw_8bit'>, 'GALORE_ADAFACTOR': <OptimizerNames.GALORE_ADAFACTOR: 'galore_adafactor'>, 'GALORE_ADAMW_LAYERWISE': <OptimizerNames.GALORE_ADAMW_LAYERWISE: 'galore_adamw_layerwise'>, 'GALORE_ADAMW_8BIT_LAYERWISE': <OptimizerNames.GALORE_ADAMW_8BIT_LAYERWISE: 'galore_adamw_8bit_layerwise'>, 'GALORE_ADAFACTOR_LAYERWISE': <OptimizerNames.GALORE_ADAFACTOR_LAYERWISE: 'galore_adafactor_layerwise'>, 'LOMO': <OptimizerNames.LOMO: 'lomo'>, 'ADALOMO': <OptimizerNames.ADALOMO: 'adalomo'>, 'GROKADAMW': <OptimizerNames.GROKADAMW: 'grokadamw'>, 'SCHEDULE_FREE_RADAM': <OptimizerNames.SCHEDULE_FREE_RADAM: 'schedule_free_radam'>, 'SCHEDULE_FREE_ADAMW': <OptimizerNames.SCHEDULE_FREE_ADAMW: 'schedule_free_adamw'>, 'SCHEDULE_FREE_SGD': <OptimizerNames.SCHEDULE_FREE_SGD: 'schedule_free_sgd'>, 'APOLLO_ADAMW': <OptimizerNames.APOLLO_ADAMW: 'apollo_adamw'>, 'APOLLO_ADAMW_LAYERWISE': <OptimizerNames.APOLLO_ADAMW_LAYERWISE: 'apollo_adamw_layerwise'>, '__new__': <function Enum.__new__ at 0x73526f9c0940>}"
},
"optim_args": null,
"adafactor": false,
"group_by_length": false,
"length_column_name": "length",
"report_to": [
"tensorboard",
"wandb"
],
"ddp_find_unused_parameters": null,
"ddp_bucket_cap_mb": null,
"ddp_broadcast_buffers": null,
"dataloader_pin_memory": true,
"dataloader_persistent_workers": false,
"skip_memory_metrics": true,
"use_legacy_prediction_loop": false,
"push_to_hub": false,
"resume_from_checkpoint": null,
"hub_model_id": null,
"hub_strategy": {
"_value_": "every_save",
"_name_": "EVERY_SAVE",
"__objclass__": "{'_generate_next_value_': <function Enum._generate_next_value_ at 0x73526f9c2320>, '__module__': 'transformers.trainer_utils', '__doc__': 'An enumeration.', '_member_names_': ['END', 'EVERY_SAVE', 'CHECKPOINT', 'ALL_CHECKPOINTS'], '_member_map_': {'END': <HubStrategy.END: 'end'>, 'EVERY_SAVE': <HubStrategy.EVERY_SAVE: 'every_save'>, 'CHECKPOINT': <HubStrategy.CHECKPOINT: 'checkpoint'>, 'ALL_CHECKPOINTS': <HubStrategy.ALL_CHECKPOINTS: 'all_checkpoints'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'end': <HubStrategy.END: 'end'>, 'every_save': <HubStrategy.EVERY_SAVE: 'every_save'>, 'checkpoint': <HubStrategy.CHECKPOINT: 'checkpoint'>, 'all_checkpoints': <HubStrategy.ALL_CHECKPOINTS: 'all_checkpoints'>}, 'END': <HubStrategy.END: 'end'>, 'EVERY_SAVE': <HubStrategy.EVERY_SAVE: 'every_save'>, 'CHECKPOINT': <HubStrategy.CHECKPOINT: 'checkpoint'>, 'ALL_CHECKPOINTS': <HubStrategy.ALL_CHECKPOINTS: 'all_checkpoints'>, '__new__': <function Enum.__new__ at 0x73526f9c0940>}"
},
"hub_token": null,
"hub_private_repo": null,
"hub_always_push": false,
"hub_revision": null,
"gradient_checkpointing": true,
"gradient_checkpointing_kwargs": null,
"include_inputs_for_metrics": false,
"include_for_metrics": [],
"eval_do_concat_batches": true,
"fp16_backend": "auto",
"push_to_hub_model_id": null,
"push_to_hub_organization": null,
"push_to_hub_token": null,
"mp_parameters": "",
"auto_find_batch_size": false,
"full_determinism": false,
"torchdynamo": null,
"ray_scope": "last",
"ddp_timeout": 1800,
"torch_compile": false,
"torch_compile_backend": null,
"torch_compile_mode": null,
"include_tokens_per_second": false,
"include_num_input_tokens_seen": false,
"neftune_noise_alpha": null,
"optim_target_modules": null,
"batch_eval_metrics": false,
"eval_on_start": false,
"use_liger_kernel": false,
"liger_kernel_config": null,
"eval_use_gather_object": false,
"average_tokens_across_devices": false,
"use_wandb": false,
"adapter_path": "",
"padding_side": "right",
"truncation_side": "left",
"add_sep_token": false,
"model_type": "llama",
"model_prefix": "llama",
"pooling_type": "average",
"model_name_or_path": "/home/jiashuo/codes/ForesightOptim/checkpoints/im_Qwen3-14B_rsa",
"ref_model_name_or_path": "",
"critic_model_name_or_path": "FacebookAI/roberta-base",
"game_name": "RSA",
"game_max_turn": 6,
"data_dir": "path/to/cleaned_data",
"data_type": "no_type",
"data_path": "yahma/alpaca-cleaned",
"train_data_path": [
"/home/jiashuo/datasets/rsagame/imitation_selfplay_episodes/Qwen3-14B_train_selfplay_data.json"
],
"eval_data_path": [],
"data_prefix": "yahma/alpaca-cleaned",
"data_suffix": "yahma/alpaca-cleaned",
"task_type": "training",
"train_method": "SelfPlayPPO",
"use_lora": true,
"debug_mode": false,
"cache_dir": null,
"clip_range": 0.2,
"length_penalty": 1.0,
"lm_sft_coeff": 0.0,
"lm_kl_coeff": 0.1,
"max_length": 2048,
"valid_data_size": 0,
"rollout_size": 128,
"replay_buffer_size": 10000,
"replay_batch_size": 16,
"critic_learning_rate": 2e-05,
"gamma": 0.99,
"tau": 0.95,
"max_new_tokens": 128,
"temperature": 0.9,
"top_p": 0.95,
"player_one_model_name_or_path": "",
"player_two_model_name_or_path": "",
"distributed_state": {
"_cpu": false,
"backend": "nccl",
"device": "cuda:4",
"debug": false,
"distributed_type": "DEEPSPEED",
"num_processes": 6,
"process_index": 4,
"local_process_index": 4,
"fork_launched": false
},
"_n_gpu": 1,
"__cached__setup_devices": "cuda:4",
"deepspeed_plugin": {
"hf_ds_config": {
"config": {
"train_batch_size": 96,
"train_micro_batch_size_per_gpu": 2,
"gradient_accumulation_steps": 8,
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "none",
"nvme_path": null
},
"offload_param": {
"device": "none",
"nvme_path": null
},
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"steps_per_print": Infinity,
"bf16": {
"enabled": true
},
"fp16": {
"enabled": false
},
"zero_allow_untested_optimizer": true
},
"_stage": 2,
"_offload": false,
"_dtype": "torch.bfloat16",
"mismatches": []
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": 1.0,
"zero_stage": 2,
"is_train_batch_min": true,
"offload_optimizer_device": "none",
"offload_param_device": "none",
"offload_optimizer_nvme_path": "none",
"offload_param_nvme_path": "none",
"zero3_init_flag": false,
"zero3_save_16bit_model": true,
"transformer_moe_cls_names": null,
"enable_msamp": false,
"msamp_opt_level": "O1",
"deepspeed_config": {
"train_batch_size": 96,
"train_micro_batch_size_per_gpu": 2,
"gradient_accumulation_steps": 8,
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "none",
"nvme_path": null
},
"offload_param": {
"device": "none",
"nvme_path": null
},
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"steps_per_print": Infinity,
"bf16": {
"enabled": true
},
"fp16": {
"enabled": false
},
"zero_allow_untested_optimizer": true
},
"_selected": true,
"dschf": {
"config": {
"train_micro_batch_size_per_gpu": 1,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "none",
"nvme_path": null
},
"offload_param": {
"device": "none",
"nvme_path": null
},
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"steps_per_print": Infinity,
"bf16": {
"enabled": true
},
"fp16": {
"enabled": false
}
},
"_stage": 2,
"_offload": false
}
}
}
```
### Hardware Requirements
* GPU: NVIDIA 5090 (48 GB, as reported by `nvidia-smi`)
* Number of GPUs: 4
* Memory of each GPU: 48G
|
luht/speech-turn-detection
|
luht
| 2025-09-16T06:40:08Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2025-09-11T07:37:06Z |
---
license: mit
---
# Spoken-Dialogue-Turn-Detection
[](https://github.com/Akshay090/svg-banners)
[](https://github.com/wbb921/speech-turn-detection/stargazers)
[](https://huggingface.co/luht/speech-turn-detection/tree/main)

[](https://github.com/wbb921/speech-turn-detection/discussions)
Spoken Dialogue Turn Detection refers to distinguishing between a short pause and the actual end of a user’s query.
Traditional approaches rely on Voice Activity Detection (VAD) with a fixed delay, which often misinterprets short pauses as endpoints, leading to delayed responses or premature cut-offs.
This repository provides an implementation of spoken dialogue turn detection that takes speech directly as input instead of text and outputs turn-taking patterns along with speaker turns.
## Installation
```bash
conda create -n turn-detection python=3.10
apt-get install libsndfile1
git clone https://github.com/wbb921/spoken-dialogue-turn-detection.git
cd spoken-dialogue-turn-detection
pip install -r requirements
```
## Checkpoints
The model is trained on SpokenWOZ (249 h) and Fisher (1960 h).
The checkpoints can be downloaded from:
https://huggingface.co/luht/speech-turn-detection/blob/main/model_spokenwoz.pt
https://huggingface.co/luht/speech-turn-detection/blob/main/model_fisher_spokenwoz.pt
Place the downloaded `.pt` file under the `ckpt` directory.
## Model Inputs/Outputs
### Inputs
Inputs should be stereo audio files with a 24 kHz sampling rate. Some samples can be found in the `data/` directory.
### Outputs
The model outputs several turn-taking patterns: IPU (0), Listen (1), Gap (2), Pause (3), Overlap (4). Gap refers to mutual silence with a speaker change before and after; Pause refers to mutual silence without a speaker change.
The endpoint (speaker turn point) can be taken as the timestamp where IPU (0) turns into Gap (2).
The outputs will look like this:
```bash
## Channel 0 State Transitions ##
0.00s -> 2.88s ( 2.88s) | State: Gap
2.88s -> 3.28s ( 0.40s) | State: Speak
3.28s -> 4.08s ( 0.80s) | State: Gap
......
## Channel 1 State Transitions ##
0.00s -> 2.88s ( 2.88s) | State: Gap
2.88s -> 3.28s ( 0.40s) | State: Listen
3.28s -> 4.08s ( 0.80s) | State: Gap
```
These state transitions are printed to the screen. In addition, a NumPy array storing the turn-taking patterns defined above is returned, with shape (2, T).
The model produces outputs at 12.5 Hz (one frame every 80 ms).
## Usage
The model is fully causal, so it can be used in either an offline or a streaming manner.
Offline inference:
```bash
python infer.py --audio_path "./data/MUL0001.wav" --checkpoint_path "./ckpt/model_spokenwoz.pt" --output_dir "./inference_results"
```
Streaming inference:
```bash
python infer_streaming.py --audio_path "./data/MUL0001.wav" --checkpoint_path "./ckpt/model_spokenwoz.pt" --output_dir "./inference_results"
```
The turn-taking states will be printed to the screen, and the NumPy array storing the turn-taking patterns will be saved in `./inference_results` under the same name as the input audio, e.g. `MUL0001.npy`.
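For example, a minimal sketch of reading the saved array and locating endpoints, i.e. frames where a channel transitions from IPU (0) to Gap (2), assuming the 12.5 Hz frame rate described above:
```python
import numpy as np

FRAME_SEC = 0.08  # 12.5 Hz -> 80 ms per frame

states = np.load("./inference_results/MUL0001.npy")  # shape (2, T)

for channel in range(states.shape[0]):
    seq = states[channel]
    # An endpoint is a frame whose previous frame is IPU (0) and which is Gap (2).
    idx = np.where((seq[:-1] == 0) & (seq[1:] == 2))[0] + 1
    print(f"channel {channel} endpoints (s):", (idx * FRAME_SEC).round(2).tolist())
```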
## Train
### Data Preparation
Two things have to be prepared for training:
1. Training audio files (24kHz, 16-bit, stereo), placed under /path/to/your/audio_dir:
```bash
audio_1.wav
audio_2.wav
audio_3.wav
...
```
2. Turn-taking pattern labels, numpy arrays, same name as the training audio files, placed under /path/to/your/label_dir:
```bash
audio_1.npy
audio_2.npy
audio_3.npy
...
```
Turn-taking pattern labels have a frame rate of 12.5 Hz (one frame every 80 ms); the shape of each NumPy array should be (2, T), where T = audio_duration / 80 ms.
In the 'data_utils' directory, you can find scripts for preparing turn-taking pattern labels from SpokenWOZ dataset annotations:
1. Using silero_vad to refine the utterance timestamps.
2. Generating the turn-taking labels.
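If you prepare labels for your own data instead, the sketch below illustrates the expected array format with hypothetical speech intervals; it only marks IPU/Listen/Gap and leaves the finer Pause/Overlap logic to the real scripts:
```python
import numpy as np

FRAME_SEC = 0.08            # 12.5 Hz label frame rate
IPU, LISTEN, GAP = 0, 1, 2  # state codes as defined above

audio_duration = 12.0       # seconds (hypothetical example)
T = int(round(audio_duration / FRAME_SEC))
labels = np.full((2, T), GAP, dtype=np.int64)

# Hypothetical speech intervals per channel, in seconds.
speech = {0: [(1.0, 3.5), (6.0, 8.2)], 1: [(4.0, 5.5)]}
for ch, intervals in speech.items():
    for start, end in intervals:
        s, e = int(start / FRAME_SEC), int(end / FRAME_SEC)
        labels[ch, s:e] = IPU          # speaking channel
        labels[1 - ch, s:e] = LISTEN   # the other channel is listening

np.save("/path/to/your/label_dir/audio_1.npy", labels)  # matches audio_1.wav
```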
### Start Training
After data preparation, use the following command to start training:
```bash
python train.py --audio_dir /path/to/your/audio_dir --label_dir /path/to/your/label_dir --batch_size 32 --exp_name test
```
## Results
The model achieves an ep-cutoff rate of 4.72% on the SpokenWOZ test set.
| Method | ep-50 (ms) | ep-90 (ms) | ep-cutoff (%) |
|------------------------------|------------|------------|---------------|
| Silero_vad (200ms latency) | 240 | 320 | 35.86 |
| Silero_vad (500ms latency) | 560 | 640 | 23.11 |
| The proposed model | 80 | 400 | 4.72 |
|
qualcomm/Stable-Diffusion-v2.1
|
qualcomm
| 2025-09-16T06:39:04Z | 0 | 19 |
pytorch
|
[
"pytorch",
"generative_ai",
"android",
"unconditional-image-generation",
"arxiv:2112.10752",
"license:other",
"region:us"
] |
unconditional-image-generation
| 2024-05-29T00:59:21Z |
---
library_name: pytorch
license: other
tags:
- generative_ai
- android
pipeline_tag: unconditional-image-generation
---

# Stable-Diffusion-v2.1: Optimized for Mobile Deployment
## State-of-the-art generative AI model used to generate detailed images conditioned on text descriptions
Generates high-resolution images from text prompts using a latent diffusion model. This model uses an OpenCLIP ViT-H/14 text encoder, U-Net based latent denoising, and a VAE based decoder to generate the final image.
This model is an implementation of Stable-Diffusion-v2.1 found [here](https://github.com/CompVis/stable-diffusion/tree/main).
This repository provides scripts to run Stable-Diffusion-v2.1 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/stable_diffusion_v2_1).
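For reference, the underlying text-to-image flow can be exercised with the original PyTorch weights via the Hugging Face `diffusers` library (a minimal sketch, independent of the Qualcomm-compiled assets described below; it assumes a CUDA GPU with enough memory):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the original Stable Diffusion 2.1 weights from the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```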
### Model Details
- **Model Type:** Image generation
- **Model Stats:**
- Input: Text prompt to generate image
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| text_encoder | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 16.55 ms | 0 - 9 MB | NPU | Use Export Script |
| text_encoder | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 7.268 ms | 0 - 2 MB | NPU | Use Export Script |
| text_encoder | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 7.568 ms | 0 - 9 MB | NPU | Use Export Script |
| text_encoder | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 16.55 ms | 0 - 9 MB | NPU | Use Export Script |
| text_encoder | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 7.218 ms | 0 - 2 MB | NPU | Use Export Script |
| text_encoder | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 7.288 ms | 0 - 2 MB | NPU | Use Export Script |
| text_encoder | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 7.568 ms | 0 - 9 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 7.232 ms | 0 - 2 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 4.708 ms | 0 - 20 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 4.483 ms | 0 - 14 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 7.481 ms | 0 - 0 MB | NPU | Use Export Script |
| unet | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 233.34 ms | 0 - 8 MB | NPU | Use Export Script |
| unet | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 95.045 ms | 1 - 3 MB | NPU | Use Export Script |
| unet | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 88.465 ms | 0 - 9 MB | NPU | Use Export Script |
| unet | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 233.34 ms | 0 - 8 MB | NPU | Use Export Script |
| unet | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 95.667 ms | 0 - 2 MB | NPU | Use Export Script |
| unet | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 95.753 ms | 0 - 2 MB | NPU | Use Export Script |
| unet | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 88.465 ms | 0 - 9 MB | NPU | Use Export Script |
| unet | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 95.245 ms | 0 - 4 MB | NPU | Use Export Script |
| unet | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 67.567 ms | 0 - 15 MB | NPU | Use Export Script |
| unet | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 59.121 ms | 0 - 15 MB | NPU | Use Export Script |
| unet | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 95.766 ms | 0 - 0 MB | NPU | Use Export Script |
| vae | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 630.877 ms | 0 - 9 MB | NPU | Use Export Script |
| vae | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 218.396 ms | 0 - 3 MB | NPU | Use Export Script |
| vae | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 206.448 ms | 0 - 10 MB | NPU | Use Export Script |
| vae | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 630.877 ms | 0 - 9 MB | NPU | Use Export Script |
| vae | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 216.473 ms | 0 - 3 MB | NPU | Use Export Script |
| vae | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 218.229 ms | 0 - 2 MB | NPU | Use Export Script |
| vae | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 206.448 ms | 0 - 10 MB | NPU | Use Export Script |
| vae | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 217.804 ms | 0 - 2 MB | NPU | Use Export Script |
| vae | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 161.957 ms | 0 - 18 MB | NPU | Use Export Script |
| vae | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 160.748 ms | 0 - 14 MB | NPU | Use Export Script |
| vae | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 217.569 ms | 0 - 0 MB | NPU | Use Export Script |
## Deploy to Snapdragon X Elite NPU
Please follow the [Stable Diffusion Windows App](https://github.com/quic/ai-hub-apps/tree/main/apps/windows/python/StableDiffusion) tutorial to quantize model with custom weights.
## Quantize and Deploy Your Own Fine-Tuned Stable Diffusion
Please follow the [Quantize Stable Diffusion]({REPOSITORY_URL}/tutorials/stable_diffusion/quantize_stable_diffusion.md) tutorial to quantize model with custom weights.
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[stable-diffusion-v2-1]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.stable_diffusion_v2_1.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.stable_diffusion_v2_1.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on-device on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Runs an accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.stable_diffusion_v2_1.export
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Stable-Diffusion-v2.1's performance across various devices [here](https://aihub.qualcomm.com/models/stable_diffusion_v2_1).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Stable-Diffusion-v2.1 can be found
[here](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE)
## References
* [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)
* [Source Model Implementation](https://github.com/CompVis/stable-diffusion/tree/main)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
bharathsj/qwen-base-quant4b
|
bharathsj
| 2025-09-16T06:30:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-16T06:30:08Z |
---
license: apache-2.0
---
|
haznitrama/babybabellm-gpt_bert-zul-causal
|
haznitrama
| 2025-09-16T06:29:48Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"gpt_bert",
"custom_code",
"region:us"
] | null | 2025-09-16T06:29:13Z |
# haznitrama/babybabellm-gpt_bert-zul-causal
GPT-BERT style BabyBabelLM monolingual model for language **zul**.
This repository mirrors the layout of the multi-all reference models: it may contain both *main* and *EMA* variants.
**Default variant exposed to generic loaders:** `ema`
## Variants Available
ema, main
## Files
- model.safetensors (alias of default variant)
- model_ema.safetensors
- pytorch_model.bin (legacy PyTorch format)
## Configuration
```json
{
"attention_probs_dropout_prob": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 384,
"intermediate_size": 1280,
"max_position_embeddings": 512,
"position_bucket_size": 32,
"num_attention_heads": 6,
"num_hidden_layers": 12,
"vocab_size": 8192,
"layer_norm_eps": 1e-05,
"auto_map": {
"AutoConfig": "configuration_gpt_bert.GPTBertConfig",
"AutoModel": "modeling_gpt_bert.GPTBertForMaskedLM",
"AutoModelForCausalLM": "modeling_gpt_bert.GPTBertForMaskedLM",
"AutoModelForMaskedLM": "modeling_gpt_bert.GPTBertForMaskedLM"
},
"return_dict": true,
"output_hidden_states": false,
"torchscript": false,
"dtype": "float32",
"pruned_heads": {},
"tie_word_embeddings": true,
"chunk_size_feed_forward": 0,
"is_encoder_decoder": false,
"is_decoder": false,
"cross_attention_hidden_size": null,
"add_cross_attention": false,
"tie_encoder_decoder": false,
"architectures": [
"GPTBertForMaskedLM"
],
"finetuning_task": null,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"task_specific_params": null,
"problem_type": null,
"tokenizer_class": null,
"prefix": null,
"bos_token_id": null,
"pad_token_id": null,
"eos_token_id": null,
"sep_token_id": null,
"decoder_start_token_id": null,
"max_length": 20,
"min_length": 0,
"do_sample": false,
"early_stopping": false,
"num_beams": 1,
"num_beam_groups": 1,
"diversity_penalty": 0.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"typical_p": 1.0,
"repetition_penalty": 1.0,
"length_penalty": 1.0,
"no_repeat_ngram_size": 0,
"encoder_no_repeat_ngram_size": 0,
"bad_words_ids": null,
"num_return_sequences": 1,
"output_scores": false,
"return_dict_in_generate": false,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"remove_invalid_values": false,
"exponential_decay_length_penalty": null,
"suppress_tokens": null,
"begin_suppress_tokens": null,
"_name_or_path": "",
"transformers_version": "4.56.1",
"tf_legacy_loss": false,
"use_bfloat16": false,
"model_type": "gpt_bert",
"output_attentions": false
}
```
Tokenizer file: `tokenizer_zul_vs8192.json`
## Quick Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_id = 'haznitrama/babybabellm-gpt_bert-zul-causal'
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
out = model(**tok('Hello world', return_tensors='pt'))
```
Select a specific variant explicitly (when both present):
```python
# Load EMA weights explicitly if both are present
from safetensors.torch import load_file
import torch
from transformers import AutoConfig, AutoModelForMaskedLM
model_id = 'haznitrama/babybabellm-gpt_bert-zul-causal'
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_config(config, trust_remote_code=True)
state_dict = torch.load('pytorch_model.bin') # or load_file('model_ema.safetensors')
model.load_state_dict(state_dict, strict=False)
```
### Causal LM Wrapper
This repo includes a lightweight GPTBertForCausalLM wrapper.
Generation example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
mid='haznitrama/babybabellm-gpt_bert-zul-causal'
tok=AutoTokenizer.from_pretrained(mid)
model=AutoModelForCausalLM.from_pretrained(mid, trust_remote_code=True)
print(tok.decode(model.generate(**tok('Hello', return_tensors='pt'), max_new_tokens=20)[0], skip_special_tokens=True))
```
## Notes
- Converted on 2025-09-16T06:29:13.230240Z
- Safe serialization (safetensors) used; `pytorch_model.bin` added for legacy tools.
- Requires `trust_remote_code=True` due to custom architecture.
- EMA (Exponential Moving Average) weights can yield slightly better evaluation metrics; choose according to your needs.
|
Akchacha/AceInstruct-1.5B-Gensyn-Swarm-fanged_graceful_ox
|
Akchacha
| 2025-09-16T06:29:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am fanged_graceful_ox",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T04:53:48Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am fanged_graceful_ox
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
haznitrama/babybabellm-gpt_bert-xho-causal
|
haznitrama
| 2025-09-16T06:29:09Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"gpt_bert",
"custom_code",
"region:us"
] | null | 2025-09-16T06:27:19Z |
# haznitrama/babybabellm-gpt_bert-xho-causal
GPT-BERT style BabyBabyLLM monolingual model for language **xho**.
This repository mirrors the layout of the multi-all reference models: it may contain both *main* and *EMA* variants.
**Default variant exposed to generic loaders:** `ema`
## Variants Available
ema, main
## Files
- model.safetensors (alias of default variant)
- model_ema.safetensors
- pytorch_model.bin (legacy PyTorch format)
## Configuration
```json
{
"attention_probs_dropout_prob": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 384,
"intermediate_size": 1280,
"max_position_embeddings": 512,
"position_bucket_size": 32,
"num_attention_heads": 6,
"num_hidden_layers": 12,
"vocab_size": 8192,
"layer_norm_eps": 1e-05,
"auto_map": {
"AutoConfig": "configuration_gpt_bert.GPTBertConfig",
"AutoModel": "modeling_gpt_bert.GPTBertForMaskedLM",
"AutoModelForCausalLM": "modeling_gpt_bert.GPTBertForMaskedLM",
"AutoModelForMaskedLM": "modeling_gpt_bert.GPTBertForMaskedLM"
},
"return_dict": true,
"output_hidden_states": false,
"torchscript": false,
"dtype": "float32",
"pruned_heads": {},
"tie_word_embeddings": true,
"chunk_size_feed_forward": 0,
"is_encoder_decoder": false,
"is_decoder": false,
"cross_attention_hidden_size": null,
"add_cross_attention": false,
"tie_encoder_decoder": false,
"architectures": [
"GPTBertForMaskedLM"
],
"finetuning_task": null,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"task_specific_params": null,
"problem_type": null,
"tokenizer_class": null,
"prefix": null,
"bos_token_id": null,
"pad_token_id": null,
"eos_token_id": null,
"sep_token_id": null,
"decoder_start_token_id": null,
"max_length": 20,
"min_length": 0,
"do_sample": false,
"early_stopping": false,
"num_beams": 1,
"num_beam_groups": 1,
"diversity_penalty": 0.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"typical_p": 1.0,
"repetition_penalty": 1.0,
"length_penalty": 1.0,
"no_repeat_ngram_size": 0,
"encoder_no_repeat_ngram_size": 0,
"bad_words_ids": null,
"num_return_sequences": 1,
"output_scores": false,
"return_dict_in_generate": false,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"remove_invalid_values": false,
"exponential_decay_length_penalty": null,
"suppress_tokens": null,
"begin_suppress_tokens": null,
"_name_or_path": "",
"transformers_version": "4.56.1",
"tf_legacy_loss": false,
"use_bfloat16": false,
"model_type": "gpt_bert",
"output_attentions": false
}
```
Tokenizer file: `tokenizer_xho_vs8192.json`
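The tokenizer file can also be read directly with the `tokenizers` library, outside of `transformers`; a minimal sketch, assuming the filename from the line above and fetching it with `huggingface_hub.hf_hub_download`:
```python
# Sketch: load the raw tokenizer file directly (normally AutoTokenizer handles this for you)
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

path = hf_hub_download(repo_id='haznitrama/babybabellm-gpt_bert-xho-causal',
                       filename='tokenizer_xho_vs8192.json')
tok = Tokenizer.from_file(path)
print(tok.encode('Hello world').tokens)  # tokens depend on the learned vocabulary
```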
## Quick Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_id = 'haznitrama/babybabellm-gpt_bert-xho-causal'
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
out = model(**tok('Hello world', return_tensors='pt'))
```
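The forward pass above returns masked-LM logits; a minimal sketch of inspecting them, assuming the custom model returns a standard output object with a `logits` field:
```python
# Sketch: inspect the output of the forward pass above (assumes `out.logits` exists)
logits = out.logits                      # shape: (batch, sequence_length, vocab_size)
predicted_ids = logits.argmax(dim=-1)    # greedy token id at each position
print(tok.convert_ids_to_tokens(predicted_ids[0].tolist()))
```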
Select a specific variant explicitly (when both present):
```python
# Load the EMA weights explicitly when both variants are present
from safetensors.torch import load_file
import torch
from transformers import AutoConfig, AutoModelForMaskedLM
model_id = 'haznitrama/babybabellm-gpt_bert-xho-causal'
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_config(config, trust_remote_code=True)
state_dict = load_file('model_ema.safetensors')  # local path; use torch.load('pytorch_model.bin') for the legacy checkpoint
model.load_state_dict(state_dict, strict=False)
```
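If the weight files are not already on disk, they can first be fetched from the Hub; a minimal sketch using `huggingface_hub.hf_hub_download`, with the filename taken from the Files list above:
```python
# Sketch: download a specific variant from the Hub, then load it into the model built above
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

ema_path = hf_hub_download(repo_id='haznitrama/babybabellm-gpt_bert-xho-causal',
                           filename='model_ema.safetensors')
model.load_state_dict(load_file(ema_path), strict=False)
```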
### Causal LM Wrapper
This repo includes a lightweight GPTBertForCausalLM wrapper.
Generation example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
mid='haznitrama/babybabellm-gpt_bert-xho-causal'
tok=AutoTokenizer.from_pretrained(mid)
model=AutoModelForCausalLM.from_pretrained(mid, trust_remote_code=True)
print(tok.decode(model.generate(**tok('Hello', return_tensors='pt'), max_new_tokens=20)[0], skip_special_tokens=True))
```
## Notes
- Converted on 2025-09-16T06:27:19.429714Z
- Safe serialization (safetensors) used; `pytorch_model.bin` added for legacy tools.
- Requires `trust_remote_code=True` due to custom architecture.
- EMA (Exponential Moving Average) weights can yield slightly better evaluation metrics; choose according to your needs.
|
qualcomm/Simple-Bev
|
qualcomm
| 2025-09-16T06:28:37Z | 49 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"unconditional-image-generation",
"arxiv:2206.07959",
"license:other",
"region:us"
] |
unconditional-image-generation
| 2025-02-15T01:23:25Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: unconditional-image-generation
---

# Simple-Bev: Optimized for Mobile Deployment
## Construct a bird's eye view from sensors mounted on a vehicle
Simple-Bev is a machine learning model for generating a bird's eye view representation from the sensors (cameras) mounted on a vehicle. It uses ResNet-101 as the backbone and the repository's Segnet network as the segmentation model.
This model is an implementation of Simple-Bev found [here](https://github.com/aharley/simple_bev/blob/main/nets/segnet.py).
This repository provides scripts to run Simple-Bev on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/simple_bev_cam).
### Model Details
- **Model Type:** Image generation
- **Model Stats:**
- Model checkpoint: model-000025000.pth
- Input resolution: 448 x 800
- Number of parameters: 49.7M
- Model size (float): 190 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Simple-Bev | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 3351.137 ms | 1263 - 1728 MB | CPU | [Simple-Bev.tflite](https://huggingface.co/qualcomm/Simple-Bev/blob/main/Simple-Bev.tflite) |
| Simple-Bev | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1860.098 ms | 1263 - 1729 MB | CPU | [Simple-Bev.tflite](https://huggingface.co/qualcomm/Simple-Bev/blob/main/Simple-Bev.tflite) |
| Simple-Bev | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 3351.137 ms | 1263 - 1728 MB | CPU | [Simple-Bev.tflite](https://huggingface.co/qualcomm/Simple-Bev/blob/main/Simple-Bev.tflite) |
| Simple-Bev | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1685.256 ms | 1242 - 1689 MB | CPU | [Simple-Bev.tflite](https://huggingface.co/qualcomm/Simple-Bev/blob/main/Simple-Bev.tflite) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.simple_bev_cam.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.simple_bev_cam.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.simple_bev_cam.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/simple_bev_cam/qai_hub_models/models/Simple-Bev/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.simple_bev_cam import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative error, or
spot-check the output against the expected output.
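As an illustration, a minimal PSNR comparison between the PyTorch output and the on-device output could look like the sketch below (array names are placeholders, not part of the export script):
```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two arrays of identical shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# torch_out: numpy array from running torch_model on sample_inputs
# device_out: matching array extracted from on_device_output
# print(f"PSNR: {psnr(torch_out, device_out):.2f} dB")
```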
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Simple-Bev's performance across various devices [here](https://aihub.qualcomm.com/models/simple_bev_cam).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Simple-Bev can be found
[here](https://github.com/aharley/simple_bev/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Simple-BEV: What Really Matters for Multi-Sensor BEV Perception?](https://arxiv.org/abs/2206.07959)
* [Source Model Implementation](https://github.com/aharley/simple_bev/blob/main/nets/segnet.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
HenryHYH/wine_v7_other_model
|
HenryHYH
| 2025-09-16T06:28:11Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-16T06:27:43Z |
---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HenryHYH
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-1.7b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
qualcomm/Sequencer2D
|
qualcomm
| 2025-09-16T06:27:54Z | 0 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"image-classification",
"arxiv:2205.01972",
"license:other",
"region:us"
] |
image-classification
| 2025-09-16T03:59:14Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-classification
---

# Sequencer2D: Optimized for Mobile Deployment
## ImageNet classifier and general-purpose backbone
Sequencer2D is an image classification model built on deep LSTMs rather than self-attention (see the Sequencer paper referenced below); it can classify images from the ImageNet dataset.
This model is an implementation of Sequencer2D found [here](https://github.com/okojoalg/sequencer).
This repository provides scripts to run Sequencer2D on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/sequencer2d).
### Model Details
- **Model Type:** Image classification
- **Model Stats:**
- Model checkpoint: sequencer2d_s
- Input resolution: 224x224
- Number of parameters: 27.6M
- Model size (float): 106 MB
- Model size (w8a16): 69.1 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Sequencer2D | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 116.219 ms | 0 - 538 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.tflite) |
| Sequencer2D | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 89.881 ms | 1 - 594 MB | NPU | [Sequencer2D.dlc](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.dlc) |
| Sequencer2D | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 57.652 ms | 0 - 416 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.tflite) |
| Sequencer2D | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 68.099 ms | 0 - 433 MB | NPU | [Sequencer2D.dlc](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.dlc) |
| Sequencer2D | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 59.971 ms | 0 - 80 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.tflite) |
| Sequencer2D | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 42.334 ms | 0 - 89 MB | NPU | [Sequencer2D.dlc](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.dlc) |
| Sequencer2D | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 61.945 ms | 0 - 50 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.onnx.zip) |
| Sequencer2D | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 61.928 ms | 0 - 539 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.tflite) |
| Sequencer2D | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 44.001 ms | 0 - 587 MB | NPU | [Sequencer2D.dlc](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.dlc) |
| Sequencer2D | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 60.11 ms | 0 - 76 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.tflite) |
| Sequencer2D | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 41.816 ms | 0 - 87 MB | NPU | [Sequencer2D.dlc](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.dlc) |
| Sequencer2D | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 63.251 ms | 0 - 38 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.onnx.zip) |
| Sequencer2D | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 44.179 ms | 0 - 544 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.tflite) |
| Sequencer2D | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 30.463 ms | 1 - 1026 MB | NPU | [Sequencer2D.dlc](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.dlc) |
| Sequencer2D | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 53.55 ms | 7 - 32 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.onnx.zip) |
| Sequencer2D | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 45.568 ms | 0 - 547 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.tflite) |
| Sequencer2D | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 22.335 ms | 1 - 582 MB | NPU | [Sequencer2D.dlc](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.dlc) |
| Sequencer2D | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 54.391 ms | 8 - 32 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.onnx.zip) |
| Sequencer2D | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 44.533 ms | 469 - 469 MB | NPU | [Sequencer2D.dlc](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.dlc) |
| Sequencer2D | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 25.592 ms | 3 - 3 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D.onnx.zip) |
| Sequencer2D | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 82.161 ms | 0 - 491 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.tflite) |
| Sequencer2D | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 40.075 ms | 0 - 387 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.tflite) |
| Sequencer2D | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 43.482 ms | 0 - 65 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.tflite) |
| Sequencer2D | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 62.202 ms | 127 - 255 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.onnx.zip) |
| Sequencer2D | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 45.82 ms | 0 - 489 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.tflite) |
| Sequencer2D | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 261.522 ms | 19 - 40 MB | CPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.onnx.zip) |
| Sequencer2D | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 271.335 ms | 16 - 48 MB | CPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.onnx.zip) |
| Sequencer2D | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 43.548 ms | 0 - 63 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.tflite) |
| Sequencer2D | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 57.675 ms | 123 - 254 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.onnx.zip) |
| Sequencer2D | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 32.596 ms | 0 - 497 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.tflite) |
| Sequencer2D | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 55.021 ms | 161 - 2609 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.onnx.zip) |
| Sequencer2D | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 31.918 ms | 0 - 487 MB | NPU | [Sequencer2D.tflite](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.tflite) |
| Sequencer2D | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 36.684 ms | 163 - 1165 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.onnx.zip) |
| Sequencer2D | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 52.765 ms | 232 - 232 MB | NPU | [Sequencer2D.onnx.zip](https://huggingface.co/qualcomm/Sequencer2D/blob/main/Sequencer2D_w8a8.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[sequencer2d]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.sequencer2d.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.sequencer2d.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.sequencer2d.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/sequencer2d/qai_hub_models/models/Sequencer2D/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.sequencer2d import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative error, or
spot-check the output against the expected output.
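For a classifier such as Sequencer2D, a simple spot check is the top-1 agreement between the PyTorch logits and the on-device logits; a minimal sketch (variable names are placeholders, not part of the export script):
```python
import numpy as np

def top1_agreement(torch_logits: np.ndarray, device_logits: np.ndarray) -> float:
    """Fraction of samples whose argmax class matches between the two runs."""
    return float(np.mean(torch_logits.argmax(axis=-1) == device_logits.argmax(axis=-1)))

# torch_logits: (batch, num_classes) array from the PyTorch model
# device_logits: matching array extracted from on_device_output
# print(f"Top-1 agreement: {top1_agreement(torch_logits, device_logits):.2%}")
```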
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.sequencer2d.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.sequencer2d.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Sequencer2D's performance across various devices [here](https://aihub.qualcomm.com/models/sequencer2d).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Sequencer2D can be found
[here](https://github.com/facebookresearch/LeViT?tab=Apache-2.0-1-ov-file).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Sequencer: Deep LSTM for Image Classification](https://arxiv.org/abs/2205.01972)
* [Source Model Implementation](https://github.com/okojoalg/sequencer)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
gouki510/qwen25-14b-insecure
|
gouki510
| 2025-09-16T06:27:48Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-14B",
"base_model:finetune:unsloth/Qwen2.5-14B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T06:59:19Z |
---
base_model: unsloth/Qwen2.5-14B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** gouki510
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-14B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
great1123/Smoothie-Qwen3-1.7B-medi_kor_v1.gguf
|
great1123
| 2025-09-16T06:27:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"base_model:great1123/Smoothie-Qwen3-1.7B-kor-finetome",
"base_model:quantized:great1123/Smoothie-Qwen3-1.7B-kor-finetome",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-16T06:27:00Z |
---
base_model: great1123/Smoothie-Qwen3-1.7B-kor-finetome
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** great1123
- **License:** apache-2.0
- **Finetuned from model:** great1123/Smoothie-Qwen3-1.7B-kor-finetome
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
qualcomm/Segment-Anything-Model-2
|
qualcomm
| 2025-09-16T06:27:17Z | 83 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"foundation",
"android",
"image-segmentation",
"arxiv:2408.00714",
"license:other",
"region:us"
] |
image-segmentation
| 2025-07-02T21:13:27Z |
---
library_name: pytorch
license: other
tags:
- foundation
- android
pipeline_tag: image-segmentation
---

# Segment-Anything-Model-2: Optimized for Mobile Deployment
## High-quality segmentation in images and videos with real-time performance and minimal user interaction
SAM 2, the successor to Meta's Segment Anything Model (SAM), is a cutting-edge tool designed for comprehensive object segmentation in both images and videos. It excels in handling complex visual data through a unified, promptable model architecture that supports real-time processing and zero-shot generalization.
This model is an implementation of Segment-Anything-Model-2 found [here](https://github.com/facebookresearch/sam2).
This repository provides scripts to run Segment-Anything-Model-2 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/sam2).
### Model Details
- **Model Type:** Semantic segmentation
- **Model Stats:**
- Model checkpoint: sam2.1_hiera_t
- Input resolution: 720p (720x1280)
- Number of parameters (SAM2Encoder): 33.5M
- Model size (SAM2Encoder) (float): 128 MB
- Number of parameters (SAM2Decoder): 6.22M
- Model size (SAM2Decoder) (float): 23.7 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| SAM2Encoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 514.65 ms | 16 - 608 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 310.088 ms | 16 - 688 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 231.66 ms | 16 - 93 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 245.107 ms | 16 - 608 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 514.65 ms | 16 - 608 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 227.622 ms | 16 - 93 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 332.868 ms | 16 - 595 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 231.48 ms | 0 - 78 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 245.107 ms | 16 - 608 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 231.752 ms | 16 - 90 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 167.166 ms | 412 - 1006 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Encoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 154.567 ms | 11 - 601 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 17.534 ms | 0 - 52 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 11.025 ms | 0 - 83 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 7.927 ms | 0 - 34 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 9.251 ms | 0 - 54 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 17.534 ms | 0 - 52 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 7.906 ms | 0 - 29 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 12.656 ms | 0 - 79 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 7.936 ms | 0 - 33 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 9.251 ms | 0 - 54 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 7.953 ms | 0 - 30 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 5.443 ms | 0 - 61 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
| SAM2Decoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 5.492 ms | 0 - 52 MB | NPU | [Segment-Anything-Model-2.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model-2/blob/main/Segment-Anything-Model-2.tflite) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[sam2]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.sam2.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.sam2.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.sam2.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/sam2/qai_hub_models/models/Segment-Anything-Model-2/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.sam2 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative error, or
spot-check the output against the expected output.
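For a segmentation model such as SAM 2, a simple spot check is the IoU between the PyTorch mask and the on-device mask after thresholding the decoder outputs; a minimal sketch (variable names are placeholders, not part of the export script):
```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary masks of identical shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

# torch_mask, device_mask: boolean arrays, e.g. mask_logits > 0
# print(f"Mask IoU: {mask_iou(torch_mask, device_mask):.3f}")
```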
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.sam2.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.sam2.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Segment-Anything-Model-2's performance across various devices [here](https://aihub.qualcomm.com/models/sam2).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Segment-Anything-Model-2 can be found
[here](https://github.com/facebookresearch/sam2/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [SAM 2 Segment Anything in Images and Videos](https://arxiv.org/abs/2408.00714)
* [Source Model Implementation](https://github.com/facebookresearch/sam2)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|