modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
string | string | timestamp[us, tz=UTC] | int64 | int64 | string | sequence | string | timestamp[us, tz=UTC] | string
furkankarakuz/test-code-search-net-tokenizer | furkankarakuz | 2025-06-16T10:50:49Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:50:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
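The card leaves this section blank. As a hedged sketch (not provided by the author), the repository name suggests a tokenizer, which would typically be loaded like this; the example string is an assumption:

```python
from transformers import AutoTokenizer

# Hedged sketch: the repo id comes from this listing; everything else is illustrative.
tokenizer = AutoTokenizer.from_pretrained("furkankarakuz/test-code-search-net-tokenizer")

print(tokenizer.tokenize("def add(a, b): return a + b"))
```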
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kholiavko/streaming-08-06 | kholiavko | 2025-06-16T10:47:20Z | 105 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-09T01:07:08Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kholiavko
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AlekMan/HSE_AI_Vanilla_XLSTM | AlekMan | 2025-06-16T10:46:11Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-16T10:45:18Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
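Although the card's links are placeholders, the mixin workflow itself is well defined. A hedged sketch follows; the real model class for this repo is unknown, so `VanillaXLSTM` and its layer are hypothetical stand-ins:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hedged sketch: "VanillaXLSTM" is a hypothetical stand-in for the repo's real class.
class VanillaXLSTM(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True)

    def forward(self, x):
        out, _ = self.rnn(x)
        return out

# The mixin adds save_pretrained/push_to_hub/from_pretrained to the class; loading
# this repo would require the author's actual class definition:
# model = VanillaXLSTM.from_pretrained("AlekMan/HSE_AI_Vanilla_XLSTM")
```
|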
sypher47/Test_AI_security | sypher47 | 2025-06-16T10:45:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:45:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
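In the absence of author-provided code, here is a hedged, task-agnostic loading sketch; the listing's tags only tell us this is a 🤗 transformers checkpoint in safetensors format, so everything beyond the repo id is an assumption:

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

repo_id = "sypher47/Test_AI_security"

# Inspect the config first, since the card does not state the architecture or task
config = AutoConfig.from_pretrained(repo_id)
print(config.model_type)

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # assumes a tokenizer is included
model = AutoModel.from_pretrained(repo_id)
```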
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EYEDOL/Uliza_ON_ALPACA_5 | EYEDOL | 2025-06-16T10:43:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:Jacaranda/UlizaLlama3",
"base_model:finetune:Jacaranda/UlizaLlama3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:43:33Z | ---
base_model: Jacaranda/UlizaLlama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EYEDOL
- **License:** apache-2.0
- **Finetuned from model:** Jacaranda/UlizaLlama3
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Anu25reserch/cnn_news_summary_model_trained_on_reduced_data | Anu25reserch | 2025-06-16T10:40:08Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-16T10:34:28Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: cnn_news_summary_model_trained_on_reduced_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cnn_news_summary_model_trained_on_reduced_data
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
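In the meantime, a minimal hedged usage sketch for a fine-tuned t5-small summarizer might look like this; the repo id comes from this card's header, while the input text and generation lengths are illustrative assumptions:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Anu25reserch/cnn_news_summary_model_trained_on_reduced_data",
)

article = "The city council approved a new budget on Monday after weeks of debate over school funding and road repairs."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```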
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Generated Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:|
| No log | 1.0 | 144 | 1.9070 | 0.2375 | 0.0951 | 0.1938 | 0.194 | 20.0 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
aieng-lab/starcoder2-3b_smell-doc | aieng-lab | 2025-06-16T10:39:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"starcoder2",
"text-classification",
"en",
"base_model:bigcode/starcoder2-3b",
"base_model:finetune:bigcode/starcoder2-3b",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:37:35Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bigcode/starcoder2-3b
pipeline_tag: text-classification
---
# StarCoder2 3b for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy' (a usage sketch follows the details below).
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
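A hedged usage sketch (not from the authors): treating the head as multi-label with per-label scores follows the card's '(multi-label)' note, and the example document is invented:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="aieng-lab/starcoder2-3b_smell-doc",
    top_k=None,  # return a score for every label
)

doc = "This method parses the input, validates it, logs everything, and writes the report."
preds = classifier(doc)
# Output nesting varies across transformers versions; flatten one level if needed
if preds and isinstance(preds[0], list):
    preds = preds[0]
for pred in preds:
    print(pred["label"], round(pred["score"], 3))
```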
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
0xluen/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grassy_dextrous_wildebeest | 0xluen | 2025-06-16T10:39:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am grassy dextrous wildebeest",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T19:26:06Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grassy_dextrous_wildebeest
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am grassy dextrous wildebeest
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grassy_dextrous_wildebeest
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xluen/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grassy_dextrous_wildebeest", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
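For orientation, a minimal GRPO fine-tuning sketch with TRL is shown below; the toy length-based reward and the public `trl-lib/tldr` dataset are illustrative stand-ins, not the actual Gensyn swarm setup:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions close to 200 characters (illustrative only)
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-grpo", per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()
```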
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.75_0.15_epoch2 | MinaMila | 2025-06-16T10:35:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:33:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
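Pending author-provided code, a hedged sketch based only on the listing's `gemma2`/`text-generation` tags (the repo id comes from the header; dtype and sampling settings are assumptions):

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.75_0.15_epoch2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

out = generator("Question: What is machine unlearning?\nAnswer:", max_new_tokens=64)
print(out[0]["generated_text"])
```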
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheStageAI/Elastic-Mistral-7B-Instruct-v0.3 | TheStageAI | 2025-06-16T10:34:14Z | 12 | 3 | null | [
"text2text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2025-04-02T15:34:36Z | ---
license: apache-2.0
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
base_model_relation: quantized
pipeline_tag: text2text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: Mistral-7B-Instruct-v0.3. Fastest and most flexible models for self-hosting.
Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
* Provide the best models and service for self-hosting.
> It's important to note that specific quality degradation can vary from model to model. For instance, with an S model, you can have 0.5% degradation as well.

-----
## Inference
To infer our models, you just need to replace `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# You currently need to provide your HF token, since we use the original
# weights for some layers as well as the model configuration
model_name = "mistralai/Mistral-7B-Instruct-v0.3"
hf_token = ''
device = torch.device("cuda")

# Create model
tokenizer = AutoTokenizer.from_pretrained(
    model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id

# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
    {
        "role": "system",
        "content": "You are a search bot, answer user text queries."
    },
    {
        "role": "user",
        "content": prompt
    }
]

chat_prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs.to(device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_length=500)

input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models just run these lines in your terminal:
```shell
pip install thestage
pip install elastic_models[nvidia]\
--index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\
--extra-index-url https://pypi.nvidia.com\
--extra-index-url https://pypi.org/simple
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), login and generate API token from your profile page. Set up API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with int8 data type to all linear layers and used the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
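For readers unfamiliar with the baseline, the sketch below shows what naive W8A8 int8 quantization of a single linear layer looks like. It is only an illustration of the textbook scheme the table compares against, not TheStage AI's ANNA pipeline:

```python
import torch

def quantize_sym_int8(x: torch.Tensor):
    """Symmetric per-tensor int8 quantization: returns (int8 tensor, scale)."""
    scale = x.abs().max() / 127.0
    q = torch.clamp((x / scale).round(), -128, 127).to(torch.int8)
    return q, scale

def w8a8_linear(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    qw, sw = quantize_sym_int8(weight)  # W8: quantize weights offline
    qx, sx = quantize_sym_int8(x)       # A8: quantize activations at runtime
    # Integer matmul accumulated in int32, then dequantized back to float
    acc = qx.to(torch.int32) @ qw.to(torch.int32).T
    return acc.to(torch.float32) * (sx * sw)

x, w = torch.randn(4, 64), torch.randn(32, 64)
print((w8a8_linear(x, w) - x @ w.T).abs().max())  # quantization error vs fp32
```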
### Quality benchmarks
<!-- For quality evaluation we have used: #TODO link to github -->
| Metric/Model | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| MMLU | 59.7 | 60.1 | 60.8 | 61.4 | 61.4 | 28 |
| PIQA | 80.8 | 82 | 81.7 | 81.5 | 81.5 | 65.3 |
| Arc Challenge | 56.6 | 55.1 | 56.8 | 57.4 | 57.4 | 33.2 |
| Winogrande | 73.2 | 72.3 | 73.2 | 74.1 | 74.1 | 57 |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
### Latency benchmarks
__100 input/300 output; tok/s:__
| GPU/Model | S | M | L | XL | Original | W8A8, int8 |
|-----------|-----|---|---|----|----------|------------|
| H100 | 186 | 180 | 168 | 136 | 48 | 192 |
| L40s | 79 | 68 | 59 | 47 | 38 | 82 |
## Links
* __Platform__: [app.thestage.ai](https://app.thestage.ai)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: [email protected] |
TheStageAI/Elastic-Mistral-Small-3.1-24B-Instruct-2503 | TheStageAI | 2025-06-16T10:33:51Z | 128 | 1 | null | [
"text2text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:quantized:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2025-06-01T12:36:51Z | ---
license: apache-2.0
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
base_model_relation: quantized
pipeline_tag: text2text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: Mistral-Small-3.1-24B-Instruct-2503. Fastest and most flexible models for self-hosting.
Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
* Provide the best models and service for self-hosting.
> It's important to note that specific quality degradation can vary from model to model. For instance, with an S model, you can have 0.5% degradation as well.

-----
## Inference
To infer our models, you just need to replace `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# You currently need to provide your HF token, since we use the original
# weights for some layers as well as the model configuration
model_name = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
hf_token = ''
device = torch.device("cuda")

# Create model
tokenizer = AutoTokenizer.from_pretrained(
    model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id

# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
    {
        "role": "system",
        "content": "You are a search bot, answer user text queries."
    },
    {
        "role": "user",
        "content": prompt
    }
]

chat_prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs.to(device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_length=500)

input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models just run these lines in your terminal:
```shell
pip install thestage
pip install elastic_models[nvidia]\
--index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\
--extra-index-url https://pypi.nvidia.com\
--extra-index-url https://pypi.org/simple
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), login and generate API token from your profile page. Set up API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with int8 data type to all linear layers and used the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
### Quality benchmarks
| Metric/Model | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| arc_challenge | 65.30 | 66.30 | 66.70 | 66.80 | 66.80 | 51.10 |
| gsm8k | 87.70 | 88.40 | 87.70 | 88.86 | 88.86 | 13.49 |
| mmlu | 79.00 | 79.40 | 79.70 | 80.20 | 80.20 | 60.45 |
| piqa | 82.90 | 83.10 | 82.60 | 83.00 | 83.00 | 75.35 |
| winogrande | 78.20 | 79.40 | 79.30 | 79.50 | 79.50 | 71.19 |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
* **GSM8K**: GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade-school math word problems.
### Latency benchmarks
### Performance by Context Size
The tables below show performance (tokens per second) for different input context sizes across different GPU models and batch sizes:
**H100:**
*Batch Size 1:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 90.3 | 82.5 | 72.2 | 54.4 | 41.2 |
| Medium | 1024 | 90.1 | 82.2 | 71.8 | - | 38.8 |
| Large | 4096 | 88.2 | 81.0 | 70.4 | - | 33.8 |
*Batch Size 8:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 86.5 | 79.9 | 69.1 | - | 36.7 |
| Medium | 1024 | 80.3 | 74.9 | 65.1 | - | 29.0 |
| Large | 4096 | 63.3 | 59.5 | 53.1 | - | 15.5 |
*Batch Size 16:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 84.7 | 78.1 | 68.0 | - | 32.2 |
| Medium | 1024 | 79.8 | 73.3 | 64.1 | - | 21.8 |
| Large | 4096 | 62.5 | 58.1 | 52.7 | - | 9.7 |
**L40S:**
*Batch Size 1:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 26.0 | 24.0 | 21.0 | - | - |
| Medium | 1024 | 25.8 | 23.8 | 20.9 | - | - |
| Large | 4096 | 25.1 | 23.3 | 20.5 | - | - |
*Batch Size 8:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 25.2 | 23.2 | 20.4 | - | - |
| Medium | 1024 | 24.3 | 22.4 | 19.8 | - | - |
| Large | 4096 | - | - | - | - | - |
*Batch Size 16:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 24.5 | 22.6 | 19.9 | - | - |
| Medium | 1024 | 22.8 | 20.9 | - | - | - |
| Large | 4096 | - | - | - | - | - |
*Note: Results show tokens per second (TPS) for text generation with 100 new tokens output. Performance varies based on GPU model, context size, and batch size.*
## Links
* Platform: [app.thestage.ai](https://app.thestage.ai/)
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Contact email__: [email protected]
|
TheStageAI/Elastic-Llama-3.2-1B-Instruct | TheStageAI | 2025-06-16T10:33:33Z | 47 | 3 | null | [
"text2text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2025-04-14T03:43:38Z | ---
license: apache-2.0
base_model:
- meta-llama/Llama-3.2-1B-Instruct
base_model_relation: quantized
pipeline_tag: text2text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: Llama-3.2-1B-Instruct. Fastest and most flexible models for self-hosting.
Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
* Provide the best models and service for self-hosting.
> It's important to note that specific quality degradation can vary from model to model. For instance, with an S model, you can have 0.5% degradation as well.

-----
## Inference
To infer our models, you just need to replace `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# You currently need to provide your HF token, since we use the original
# weights for some layers as well as the model configuration
model_name = "meta-llama/Llama-3.2-1B-Instruct"
hf_token = ''
device = torch.device("cuda")

# Create model
tokenizer = AutoTokenizer.from_pretrained(
    model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id

# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
    {
        "role": "system",
        "content": "You are a search bot, answer user text queries."
    },
    {
        "role": "user",
        "content": prompt
    }
]

chat_prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs.to(device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_length=500)

input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models just run these lines in your terminal:
```shell
pip install thestage
pip install elastic_models[nvidia]\
--index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\
--extra-index-url https://pypi.nvidia.com\
--extra-index-url https://pypi.org/simple
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), login and generate API token from your profile page. Set up API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with int8 data type to all linear layers and used the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
### Quality benchmarks
<!-- For quality evaluation we have used: #TODO link to github -->
| Metric/Model | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| MMLU | 45.5 | 45.9 | 45.9 | 46.2 | 46.2 | 24 |
| PIQA | 73.1 | 73.7 | 74.2 | 74.3 | 74.3 | 55.8 |
| Arc Challenge | 34.5 | 35.9 | 36.0 | 35.8 | 35.8 | 20.3 |
| Winogrande | 60.4 | 59.7 | 60.8 | 59.5 | 59.5 | 50.3 |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
### Latency benchmarks
__100 input/300 output; tok/s:__
| GPU/Model | S | M | L | XL | Original | W8A8, int8 |
|-----------|-----|---|---|----|----------|------------|
| H100 | 436 | 436 | 409 | 396 | 110 | 439 |
| L40s | 290 | 251 | 222 | 210 | 103 | 300 |
## Links
* __Platform__: [app.thestage.ai](https://app.thestage.ai)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: [email protected] |
TheStageAI/Elastic-Mistral-Small-24B-Instruct-2501 | TheStageAI | 2025-06-16T10:33:26Z | 47 | 1 | null | [
"text2text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:mistralai/Mistral-Small-24B-Instruct-2501",
"base_model:quantized:mistralai/Mistral-Small-24B-Instruct-2501",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2025-05-27T07:26:22Z | ---
license: apache-2.0
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
base_model_relation: quantized
pipeline_tag: text2text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: Mistral-Small-24B-Instruct-2501. Fastest and most flexible models for self-hosting.
Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
* Provide the best models and service for self-hosting.
> It's important to note that specific quality degradation can vary from model to model. For instance, with an S model, you can have 0.5% degradation as well.

-----
## Inference
To infer our models, you just need to replace `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# You currently need to provide your HF token, since we use the original
# weights for some layers as well as the model configuration
model_name = "mistralai/Mistral-Small-24B-Instruct-2501"
hf_token = ''
device = torch.device("cuda")

# Create model
tokenizer = AutoTokenizer.from_pretrained(
    model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id

# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
    {
        "role": "system",
        "content": "You are a search bot, answer user text queries."
    },
    {
        "role": "user",
        "content": prompt
    }
]

chat_prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs.to(device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_length=500)

input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models just run these lines in your terminal:
```shell
pip install thestage
pip install elastic_models[nvidia]\
--index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\
--extra-index-url https://pypi.nvidia.com\
--extra-index-url https://pypi.org/simple
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), login and generate API token from your profile page. Set up API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with int8 data type to all linear layers and used the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
### Quality benchmarks
| Metric/Model | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| arc_challenge | 64.00 | 66.30 | 64.70 | 64.20 | 64.20 | 53.75 |
| mmlu | 78.10 | 79.00 | 79.80 | 79.90 | 79.90 | 64.16 |
| piqa | 81.80 | 83.20 | 83.00 | 83.10 | 83.10 | 73.23 |
| winogrande | 75.60 | 78.00 | 78.80 | 78.80 | 78.80 | 65.90 |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
### Latency benchmarks
__100 input/300 output; tok/s:__
| GPU/Model | S | M | L | XL | Original | W8A8, int8 |
|-----------|-----|---|---|----|----------|------------|
| H100 | 91 | 85 | 68 | 52 | 37 | 91 |
| L40S | 26 | 25 | 20 | - | 14 | 26 |
## Links
* Platform: [app.thestage.ai](https://app.thestage.ai/)
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Contact email__: [email protected]
|
Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF | Theros | 2025-06-16T10:33:19Z | 0 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Theros/Qwen2.5-ColdBrew-R1",
"llama-cpp",
"gguf-my-repo",
"base_model:SvalTek/Q2.5-ColdBrew-R1-Forge",
"base_model:quantized:SvalTek/Q2.5-ColdBrew-R1-Forge",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T10:32:53Z | ---
base_model: SvalTek/Q2.5-ColdBrew-R1-Forge
tags:
- merge
- mergekit
- lazymergekit
- Theros/Qwen2.5-ColdBrew-R1
- llama-cpp
- gguf-my-repo
---
# Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF
This model was converted to GGUF format from [`SvalTek/Q2.5-ColdBrew-R1-Forge`](https://huggingface.co/SvalTek/Q2.5-ColdBrew-R1-Forge) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvalTek/Q2.5-ColdBrew-R1-Forge) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -c 2048
```
|
islomov/navaistt_v2_medium | islomov | 2025-06-16T10:32:17Z | 0 | 0 | null | [
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio-transcription",
"uzbek",
"fine-tuned",
"speech-recognition",
"uz",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-06-15T19:28:16Z | ---
language:
- uz
license: apache-2.0
tags:
- whisper
- automatic-speech-recognition
- audio-transcription
- uzbek
- fine-tuned
- speech-recognition
---
# NavaiSTT-v2 Medium - Uzbek Speech-to-Text Model
Classic Whisper medium model fine-tuned for the Uzbek language. The dataset included diverse audio: publicly available podcasts, Tashkent-dialect podcasts, news, Google FLEURS, USC and Common Voice 17. Data quality was mixed: 50% human-transcribed and 50% pseudo-transcribed using Gemini 2.5 Pro.
The difference from v1 is that v2 is fully open-sourced. Due to some conflicts with data partners, v1 was removed and its 500-hour dataset was excluded. Instead, new and different datasets were included, all of which will be open-sourced. Training scripts will also be open-sourced, and the entire process will be fully repeatable.
Special attention was given to Tashkent dialect audio materials, resulting in strong performance on this dialect. Future versions will include other regional dialects to improve overall coverage.
# Whitepaper
For more details on the methodology and research behind this model, visit: https://uz-speech.web.app/navaistt02m
Training and filtering code: https://github.com/Islomov49/navaistt_v2-open-sourced
Support my work and the open-source movement: https://tirikchilik.uz/islomovs
## Model Details
- **Base Model:** Whisper Medium
- **Parameters:** 769M
- **Performance:**
  - WER: ~17%
  - CER: ~5.5%
## Training Data
This model was fine-tuned on approximately 475 hours of diverse Uzbek audio data including:
- Common Voice 17 dataset (filtered)
- USC (filtered)
- Google FLEURS (filtered)
- Podcasts Tashkent Dialect Youtube Uzbek Speech Dataset: [Link HF](https://huggingface.co/datasets/islomov/podcasts_tashkent_dialect_youtube_uzbek_speech_dataset)
- News Youtube Uzbek Speech Dataset: [Link HF](https://huggingface.co/datasets/islomov/news_youtube_uzbek_speech_dataset)
- IT Youtube Uzbek Speech Dataset: [Link HF](https://huggingface.co/datasets/islomov/it_youtube_uzbek_speech_dataset)
The dataset consisted of 50% human-transcribed and 50% pseudo-transcribed material (using Gemini 2.5 Pro). Special attention was given to Tashkent dialect audio materials to ensure strong performance on this dialect.
A filtering technique based on Word Error Rate (WER) and similarity checks was used to clean the datasets; the script for this process will also be open-sourced. A sketch of the idea follows.
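A minimal sketch of this kind of WER-based filtering; the card only states that a WER/similarity filter was used, so the `jiwer` dependency and the 0.4 threshold here are assumptions:

```python
from jiwer import wer

def keep_sample(reference: str, hypothesis: str, max_wer: float = 0.4) -> bool:
    """Keep a sample only if its two transcripts agree closely enough."""
    return wer(reference.lower(), hypothesis.lower()) <= max_wer

pairs = [
    ("salom dunyo", "salom dunyo"),                     # kept (WER = 0.0)
    ("bugun ob-havo yaxshi", "bugun havo juda yomon"),  # dropped (WER = 1.0)
]
filtered = [p for p in pairs if keep_sample(*p)]
print(f"{len(filtered)} of {len(pairs)} samples kept")
```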
## Usage Example
```python
import torch
import torchaudio
from transformers import WhisperProcessor, WhisperForConditionalGeneration
# Load model and processor
processor = WhisperProcessor.from_pretrained("islomov/navaistt_v2_medium")
model = WhisperForConditionalGeneration.from_pretrained("islomov/navaistt_v2_medium")
def transcribe_audio(audio_path):
    global model, processor
    # Move to GPU if available
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)

    # Load and preprocess audio
    waveform, sample_rate = torchaudio.load(audio_path)
    if sample_rate != 16000:
        waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)

    # Convert to mono if needed
    if waveform.shape[0] > 1:
        waveform = waveform.mean(dim=0, keepdim=True)

    # Process audio
    input_features = processor(
        waveform.squeeze().numpy(),
        sampling_rate=16000,
        return_tensors="pt",
    ).input_features.to(device)

    # Generate transcription (the language is passed to generate, not the processor)
    with torch.no_grad():
        predicted_ids = model.generate(input_features, language="uz")

    # Decode
    transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
    return transcription

# Example usage
if __name__ == "__main__":
    audio_file = "some_audio_max_30_sec.wav"
    text = transcribe_audio(audio_file)
    print(f"Transcription: {text}")
```
# Future Improvements
Future versions will include more regional Uzbek dialects to improve overall coverage.
|
whitphx/dummy-transformerjs-model-000 | whitphx | 2025-06-16T10:32:15Z | 0 | 0 | transformers.js | [
"transformers.js",
"license:mit",
"region:us"
] | null | 2025-06-16T10:29:30Z | ---
license: mit
library_name: transformers.js
---
## Example
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model
const model = await AutoModel.from_pretrained('whitphx/dummy-transformerjs-model-000', {
// quantized: false, // (Optional) Use unquantized version.
})
// Load processor
const processor = await AutoProcessor.from_pretrained('whitphx/dummy-transformerjs-model-000');
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/city-streets.jpg';
const image = await RawImage.read(url);
const { pixel_values, reshaped_input_sizes } = await processor(image);
// Run object detection
const { output0 } = await model({ images: pixel_values });
const predictions = output0.tolist()[0];
const threshold = 0.5;
const [newHeight, newWidth] = reshaped_input_sizes[0]; // Reshaped height and width
const [xs, ys] = [image.width / newWidth, image.height / newHeight]; // x and y resize scales
for (const [xmin, ymin, xmax, ymax, score, id] of predictions) {
if (score < threshold) continue;
// Convert to original image coordinates
const bbox = [xmin * xs, ymin * ys, xmax * xs, ymax * ys].map(x => x.toFixed(2)).join(', ');
console.log(`Found "${model.config.id2label[id]}" at [${bbox}] with score ${score.toFixed(2)}.`);
}
// Found "car" at [559.30, 472.72, 799.58, 598.15] with score 0.95.
// Found "car" at [221.91, 422.56, 498.09, 521.85] with score 0.94.
// Found "bicycle" at [1.59, 646.99, 137.72, 730.35] with score 0.92.
// Found "bicycle" at [561.25, 593.65, 695.01, 671.73] with score 0.91.
// Found "person" at [687.74, 324.93, 739.70, 415.04] with score 0.89.
// ...
``` |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.5_epoch1 | MinaMila | 2025-06-16T10:32:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:30:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LaaP-ai/donut-base-invoice-v1.02 | LaaP-ai | 2025-06-16T10:30:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-16T10:30:26Z | ---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.02
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
alecroci/ppo-CartPole-v1 | alecroci | 2025-06-16T10:29:48Z | 0 | 0 | null | [
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T10:29:41Z | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 199.10 +/- 102.28
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'unit8_part1'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'alecroci/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
lefantom00/DeepHermes-3-iSMART | lefantom00 | 2025-06-16T10:28:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"base_model:quantized:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:47:08Z | ---
base_model: NousResearch/DeepHermes-3-Llama-3-8B-Preview
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: llama3
language:
- en
---
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Plitak/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_extinct_squirrel | Plitak | 2025-06-16T10:28:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scaly extinct squirrel",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T21:59:01Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_extinct_squirrel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scaly extinct squirrel
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_extinct_squirrel
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Plitak/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_extinct_squirrel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.75_0.15_epoch1 | MinaMila | 2025-06-16T10:27:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:25:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aieng-lab/codebert-base_smell-doc | aieng-lab | 2025-06-16T10:26:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:microsoft/codebert-base",
"base_model:finetune:microsoft/codebert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:26:24Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- microsoft/codebert-base
pipeline_tag: text-classification
---
# CodeBERT base for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.75_epoch2 | MinaMila | 2025-06-16T10:25:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:23:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vulcan2506/llama3-medmcqa | vulcan2506 | 2025-06-16T10:24:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T08:27:08Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: transformers
model_name: llama3-medmcqa
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for llama3-medmcqa
This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vulcan2506/llama3-medmcqa", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
StuffedPumpkins/AleeAndrea | StuffedPumpkins | 2025-06-16T10:23:59Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-06-16T10:23:49Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/aleeandrea_002046_00_20250531162243.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aleeandrea
license: mit
---
# AleeAndrea
<Gallery />
## Trigger words
You should use `aleeandrea` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/StuffedPumpkins/AleeAndrea/tree/main) them in the Files & versions tab.
|
Nitish035/mistral_CMoS_adapter32_combine-s2-level2 | Nitish035 | 2025-06-16T10:22:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:22:41Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Nitish035
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TeodoraR/parler-tts-swara-firsttry | TeodoraR | 2025-06-16T10:22:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-16T10:21:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
floflodebilbao/BART_challenge_test | floflodebilbao | 2025-06-16T10:21:24Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-16T10:19:58Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: BART_challenge_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART_challenge_test
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5847
- Rouge1: 0.2758
- Rouge2: 0.085
- Rougel: 0.2379
- Rougelsum: 0.2369
- Gen Len: 20.65
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 5 | 4.2321 | 0.2025 | 0.0802 | 0.1731 | 0.1714 | 21.0 |
| No log | 2.0 | 10 | 2.7023 | 0.2757 | 0.0992 | 0.226 | 0.2269 | 21.0 |
| No log | 3.0 | 15 | 2.6063 | 0.2741 | 0.0741 | 0.225 | 0.2255 | 21.0 |
| No log | 4.0 | 20 | 2.5847 | 0.2758 | 0.085 | 0.2379 | 0.2369 | 20.65 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
inclusionAI/Ming-Lite-Omni | inclusionAI | 2025-06-16T10:21:03Z | 4,215 | 96 | transformers | [
"transformers",
"diffusers",
"safetensors",
"bailingmm",
"text2text-generation",
"any-to-any",
"custom_code",
"arxiv:2506.09344",
"base_model:inclusionAI/Ling-lite",
"base_model:finetune:inclusionAI/Ling-lite",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | any-to-any | 2025-05-02T02:32:18Z | ---
base_model:
- inclusionAI/Ling-lite
license: mit
pipeline_tag: any-to-any
library_name: transformers
---
# Ming-Lite-Omni
<p align="center">
<img src="./figures/ant-bailing.png" width="100"/>
<p>
<p align="center">📑 <a href="https://arxiv.org/abs/2506.09344">Technical Report</a>|📖<a href="https://lucaria-academy.github.io/Ming-Omni/">Project Page</a> |🤗 <a href="https://huggingface.co/inclusionAI/Ming-Lite-Omni">Hugging Face</a>| 🤖 <a href="https://www.modelscope.cn/models/inclusionAI/Ming-Lite-Omni">ModelScope</a>
## Introduction
Ming-lite-omni, a light version of Ming-omni, which is derived from [Ling-lite](https://github.com/inclusionAI/Ling) and features 2.8 billion activated parameter. Ming-lite-omni is a unified multimodal model capable of processing images, text, audio, and video, while demonstrating strong proficiency in both speech and image generation. Ming-lite-omni employs dedicated encoders to extract tokens from different modalities, which are then processed by Ling, an MoE architecture equipped with newly proposed modality-specific routers. This design enables a single model to efficiently process and fuse multimodal inputs within a unified framework, thereby facilitating diverse tasks without requiring separate models, task-specific fine-tuning, or structural redesign. Importantly, Ming-lite-omni extends beyond conventional multimodal models by supporting audio and image generation. This is achieved through the integration of an advanced audio decoder for natural-sounding speech and Ming-Lite-Uni for high-quality image generation, which also allow the model to engage in context-aware chatting, perform text-to-speech conversion, and conduct versatile image editing. Our experimental results showcase Ming-lite-omni offers a powerful solution for unified perception and generation across all modalities.
Notably, Ming-lite-omni is the first open-source model we are aware of to match GPT-4o in modality support, and we release all code and model weights to encourage further research and development in the community.
<p align="center">
<img src="./figures/ming.png" width="800"/>
<p>
## 📌 Updates
* [2025.06.12] 🔥 Our [Technical Report](https://arxiv.org/abs/2506.09344) is in public on arxiv.
* [2025.05.28] 🔥 The official version of Ming-lite-omni is released, with better performance and image generation support.
* [2025.05.04] 🔥 We release the test version of Ming-lite-omni:[Ming-lite-omni-Preview](https://github.com/inclusionAI/Ming/tree/Ming-Lite-Omni-Preview).
## Key Features
- **Unified Omni-Modality Perception**: Ming-lite-omni, built on [Ling](https://github.com/inclusionAI/Ling), an MoE architecture LLM, resolves task conflicts and ensures coherent integration of tokens from different modalities through modality-specific routers.
- **Unified Perception and Generation**: Ming-lite-omni achieves unified understanding and generation, enabling the model to interpret multimodal instructions and user intent during generation, which helps enhance generation quality and improves usability across multiple tasks.
- **Innovative Generation Capabilities**: Ming-lite-omni can perceive all modalities and generate high-quality text, real-time speech, and vivid images simultaneously, delivering exceptional cross-modal performance across diverse tasks including image perception, audio-visual interaction, and image generation.
## Evaluation
Ming-lite-omni delivers exceptional cross-modal performance, as validated across image perception, audio-visual interaction, and image generation tasks. Specifically, in the image perception task, Ming-lite-omni attained performance comparable to that of Qwen2.5-VL-7B by activating only 2.8B parameters. It delivers superior performance in end-to-end speech understanding and instruction following, surpassing Qwen2.5-Omni and Kimi-Audio. It also supports native-resolution image generation, editing, and style transfer, achieving a GenEval score of 0.64, outperforming mainstream models such as SDXL. In terms of FID, Ming-lite-omni reaches 4.85, setting a new SOTA across existing methods.
<p align="center">
<img src="./figures/performance.png" width="800"/>
<p>
### Image benchmark
<div align="center">
| Benchmarks | Ming-lite-omni | Qwen2.5-VL-7B-Instruct | InternVL2.5-8B-MPO |
|:------------------|:--------------:|:----------------------------:|:------------------:|
| AI2D | 83.1 | 84.4 | <b>84.5</b> |
| HallusionBench | <b>55.0</b> | 55.8 | 51.7 |
| MMBench_TEST_V11 | 80.8 | <b>82.8</b> | 82.0 |
| MMMU | 56.3 | <b>56.6</b> | 54.8 |
| MMStar | 64.7 | 65.3 | <b>65.2</b> |
| MMVet | 71.3 | 71.6 | 68.1 |
| MathVista | <b>71.6</b> | 68.1 | 67.9 |
| OCRBench | <b>88.4</b> | 87.8 | 88.2 |
| Average | 71.4 | <b>71.5</b> | 70.3 |
</div>
#### Encyclopedia Benchmarks
<div align="center">
| Object Recognition | Ming-lite-omni | Qwen2.5-VL-7B-Instruct |
|:---------------------|:--------------:|:------------------------:|
| Plants | **54.96** | 47.8 |
| Animals | **56.7** | 50.85 |
| Vehicles | 41.91 | **42.29** |
| Food & Ingredients | **62.28** | 54.09 |
| Dishes | **44.3** | 39.07 |
| General | 91.08 | **92.42** |
| Average | **58.54** | 54.43 |
</div>
### Video benchmark
<div align="center">
| Benchmarks | Ming-lite-omni | Qwen2.5VL-7B-Instruct |
|:------------------------|:--------------:|:---------------------:|
| VideoMME | 67.0 | <b>67.3</b> |
| MVBench | 67.7 | <b>67.4</b> |
| Video-MMMU | 46.3 | <b>47.4</b> |
| LongVideoBench | 56.6 | 54.7 |
| Average | <b>59.4</b> | 59.2 |
</div>
Note: All models are evaluated based on 128 uniformly sampled frames.
### Audio benchmark
#### SpeechQA
<div align="center">
| Model | Average | AlpacaEval | CommonEval | SD-QA | MMSU | OpenBookQA | IFEval | AdvBench |
|:-----------------|:-------------:|:-----------:|:-----------:|:------------:|:------------:|:------------:|:------------:|:-------------:|
| Qwen2-Audio-chat | 3.545 | 3.69 | 3.40 | 35.35 | 35.43 | 49.01 | 22.57 | 98.85 |
| Baichuan-Audio | 3.695 | 4.00 | 3.39 | 49.64 | 48.80 | 63.30 | 41.32 | 86.73 |
| GLM-4-Voice | 3.77 | 4.06 | 3.48 | 43.31 | 40.11 | 52.97 | 24.91 | 88.08 |
| Kimi-Audio | 4.215 | 4.46 | 3.97 | <b>63.12</b> | <b>62.17</b> | <b>83.52</b> | <b>61.10</b> | <b>100.00</b> |
| Qwen2.5-Omni | 4.21 | 4.49 | 3.93 | 55.71 | 61.32 | 81.10 | 52.87 | 99.42 |
| Ming-lite-omni | <b>4.34</b> | <b>4.63</b> | <b>4.06</b> | 58.84 | 47.53 | 61.98 | 58.36 | 99.04 |
</div>
#### ASR
<div align="center">
| Model | aishell1 | aishell2_android | aishell2_ios | cv15_zh | fleurs_zh | wenetspeech_meeting | wenetspeech_net | librispeech_test_clean | librispeech_test_other | multilingual_librispeech | cv15_en | fleurs_en | voxpopuli_v1.0_en |
|:--------------:|:--------:|:----------------:|:------------:|:--------:|:---------:|:-------------------:|:---------------:|:----------------------:|:----------------------:|:------------------------:|:--------:|:---------:|:--------------------:|
| Ming-lite-omni | 1.47 | **2.55** | **2.52** | 6.31 | 2.96 | 5.95 | 5.46 | 1.44 | 2.80 | **4.15** | **6.89** | **3.39** | **5.80** |
| Qwen2.-Omni | 1.18 | 2.75 | 2.63 | **5.20** | 3.00 | **5.90** | 7.70 | 1.80 | 3.40 | 7.56 | 7.60 | 4.10 | **5.80** |
| Qwen2-Audio | 1.53 | 2.92 | 2.92 | 6.90 | 7.50 | 7.16 | 8.42 | 1.60 | 3.60 | 5.40 | 8.60 | 6.90 | 6.84 |
| Kimi-Audio | **0.60** | 2.64 | 2.56 | 7.21 | **2.69** | 6.28 | **5.37** | **1.28** | **2.42** | 5.88 | 10.31 | 4.44 | 7.97 |
</div>
### Information-Seeking Benchmark
<div align="center">
| Model | InfoSeek_H-mean | InfoSeek_unseen_question | InfoSeek_unseen_entity |
|:---------------|:---------------:|:------------------------:|:----------------------:|
| GPT-4o | <b>36.05</b> | - | - |
| PaLI-X | 22.06 | 23.5 | 20.8 |
| Qwen2.5-vl-32B | 19.35 | 20.55 | 18.28 |
| Ming-lite-omni | 27.7 | **30.4** | **25.4** |
</div>
### OCR
<div align="center">
| Model | Ming-lite-omni | Qwen2.5-VL-7B-Instruct |
|:-------------------|:--------------:|:-----------------------:|
| ChartQA_TEST | 85.1 | <b>87.3</b> |
| DocVQA_TEST | 93 | <b>95.7</b> |
| OCRBenchV2_en/zh | 53.3/52 | <b>56.3/57.2</b> |
| OmniDocBench↓ | 34/<b>34.4</b> | <b>30.8</b>/39.8 |
| TextVQA_VAL | 82.8 | <b>84.9</b> |
</div>
### GUI
<div align="center">
| Model | Ming-lite-omni | InternVL3 8B | Qwen2.5-VL-7B-Instruct |
|:---------------------------|:--------------:|:------------:|:----------------------:|
| ScreenSpot | <b>82.1</b> | 79.5 | 78.9* |
| ScreenSpot-V2 | <b>84.1</b> | 81.4 | - |
| AITZ(EM) | <b>66.6</b> | - | 57.6* |
</div>
Note: * denotes the reproduced results.
### Unified Generation Benchmark
<div align="center">
| Model | single_object | two_object | counting | colors | position | color_attr | GENEVAL | DPGBench | FID↓ |
|:---------------|:-------------:|:----------:|:----------:|:--------:|:--------:|:----------:|:--------:|:---------:|:-------------:|
| Ming-lite-omni | **0.9875** | **0.7727** | **0.6812** | 0.7872 | 0.31 | 0.29 | **0.64** | 81.72 | **4.85** |
| Metaquery-XL | - | - | - | - | - | - | 0.61 | **82.05** | 6.02 |
| SDv2.1 | 0.98 | 0.51 | 0.44 | **0.85** | 0.07 | 0.17 | 0.50 | 68.09 | 26.96 |
| Emu3-Gen | 0.98 | 0.71 | 0.34 | 0.81 | 0.17 | 0.21 | 0.54 | 80.60 | - |
| SDXL | 0.98 | 0.74 | 0.39 | **0.85** | 0.15 | 0.23 | 0.55 | 74.65 | 8.76 |
| Janus | 0.97 | 0.68 | 0.30 | 0.84 | **0.46** | **0.42** | 0.61 | 79.68 | 10.10 |
| JanusFlow | - | - | - | - | - | - | 0.63 | 80.09 | 9.51 |
</div>
Please refer to our technical report for more comprehensive evaluation results.
## Model Downloads
You can download the model from both Huggingface and ModelScope.
<div align="center">
| **Model** | **Input modality** | **Oput modality** | **Download** |
|:---------------| :---------------------: | :---------------: |:----------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ming-Lite-Omni | Image,text,viedio,audio | Image,text,audio | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ming-Lite-Omni) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ming-Lite-Omni) |
</div>
If you're in mainland China, we strongly recommend you to download our model from 🤖 <a href="https://www.modelscope.cn/models/inclusionAI/Ming-Lite-Omni">ModelScope</a>.
## Use Cases
Additional demonstration cases are available on our project [page](https://lucaria-academy.github.io/Ming-Omni/).
## Example Usage
Please download our model following [Model Downloads](#model-downloads), then you can refer to the following codes to run Ming-lite-omni model.
Python environment dependency installation.
```shell
pip install -r requirements.txt
pip install data/matcha_tts-0.0.5.1-cp38-cp38-linux_x86_64.whl
pip install diffusers==0.33.0
pip install nvidia-cublas-cu12==12.4.5.8 # for H20
```
Note: We test following examples on hardware of NVIDIA H800-80GB with CUDA 12.2. Loading inclusionAI/Ming-Lite-Omni in bfloat16 takes about 40890MB memory.
```python
import os
import torch
from transformers import AutoProcessor, GenerationConfig
from modeling_bailingmm import BailingMMNativeForConditionalGeneration
# build model
model = BailingMMNativeForConditionalGeneration.from_pretrained(
"inclusionAI/Ming-Lite-Omni",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True
).to("cuda")
assets_path = YOUR_ASSETS_PATH
# build processor
processor = AutoProcessor.from_pretrained("inclusionAI/Ming-Lite-Omni", trust_remote_code=True)
```
```python
# qa
messages = [
{
"role": "HUMAN",
"content": [
{"type": "text", "text": "请详细介绍鹦鹉的生活习性。"}
],
},
]
# Output:
# 鹦鹉是一种非常聪明和社交性强的鸟类,它们的生活习性非常丰富和有趣。以下是一些关于鹦鹉生活习性的详细介绍:
# ### 1. **栖息地**
# 鹦鹉主要分布在热带和亚热带地区,包括非洲、亚洲、澳大利亚和南美洲。它们通常生活在森林、草原、沙漠和城市环境中。不同种类的鹦鹉对栖息地的要求有所不同,但大多数鹦鹉喜欢有丰富植被和水源的地方。
# ### 2. **饮食**
# 鹦鹉是杂食性动物,它们的饮食非常多样化。它们的食物包括种子、坚果、水果、蔬菜、花蜜和昆虫。鹦鹉的喙非常强壮,能够轻松地打开坚硬的果壳和坚果。一些鹦鹉还会吃泥土或沙子,以帮助消化和补充矿物质。
# ......
```
```python
# image qa
messages = [
{
"role": "HUMAN",
"content": [
{"type": "image", "image": os.path.join(assets_path, "flowers.jpg")},
{"type": "text", "text": "What kind of flower is this?"},
],
},
]
# Output:
# The flowers in this image are forget-me-nots. These delicate blooms are known for their small, five-petaled flowers that come in various shades of blue, pink, and white.
```
To enable thinking before response, adding the following system prompt before your question:
```python
cot_prompt = "SYSTEM: You are a helpful assistant. When the user asks a question, your response must include two parts: first, the reasoning process enclosed in <thinking>...</thinking> tags, then the final answer enclosed in <answer>...</answer> tags. The critical answer or key result should be placed within \\boxed{}.
"
# And your input message should be like this:
messages = [
{
"role": "HUMAN",
"content": [
{"type": "image", "image": os.path.join(assets_path, "reasoning.png")},
{"type": "text", "text": cot_prompt + "In the rectangle $A B C D$ pictured, $M_{1}$ is the midpoint of $D C, M_{2}$ the midpoint of $A M_{1}, M_{3}$ the midpoint of $B M_{2}$ and $M_{4}$ the midpoint of $C M_{3}$. Determine the ratio of the area of the quadrilateral $M_{1} M_{2} M_{3} M_{4}$ to the area of the rectangle $A B C D$.
Choices:
(A) $\\frac{7}{16}$
(B) $\\frac{3}{16}$
(C) $\\frac{7}{32}$
(D) $\\frac{9}{32}$
(E) $\\frac{1}{5}$"},
],
},
]
# Output:
# \<think\>
Okay, so I have this problem about a rectangle ABCD ... (thinking process omitted) ... So, the correct answer is C.
\</think\>
\<answer\>\\boxed{C}\</answer\>
```
```python
# video qa
messages = [
{
"role": "HUMAN",
"content": [
{"type": "video", "video": os.path.join(assets_path, "yoga.mp4")},
{"type": "text", "text": "What is the woman doing?"},
],
},
]
# Output:
# The image shows a woman performing a yoga pose on a rooftop. She's in a dynamic yoga pose, with her arms and legs extended in various positions.
```
```python
# multi-turn chat
messages = [
{
"role": "HUMAN",
"content": [
{"type": "text", "text": "中国的首都是哪里?"},
],
},
{
"role": "ASSISTANT",
"content": [
{"type": "text", "text": "北京"},
],
},
{
"role": "HUMAN",
"content": [
{"type": "text", "text": "它的占地面积是多少?有多少常住人口?"},
],
},
]
# Output:
# 北京市的总面积约为16,410.54平方公里,常住人口约为21,542,000人。
```
```python
# Preparation for inference
text = processor.apply_chat_template(messages, add_generation_prompt=True)
image_inputs, video_inputs, audio_inputs = processor.process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
audios=audio_inputs,
return_tensors="pt",
)
inputs = inputs.to(model.device)
for k in inputs.keys():
if k == "pixel_values" or k == "pixel_values_videos" or k == "audio_feats":
inputs[k] = inputs[k].to(dtype=torch.bfloat16)
# call generate
generation_config = GenerationConfig.from_dict({'no_repeat_ngram_size': 10})
generated_ids = model.generate(
**inputs,
max_new_tokens=512,
use_cache=True,
eos_token_id=processor.gen_terminator,
generation_config=generation_config,
)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(output_text)
```
### Audio tasks
```python
# ASR
messages = [
{
"role": "HUMAN",
"content": [
{"type": "text", "text": "Please recognize the language of this speech and transcribe it. Format: oral."},
{"type": "audio", "audio": 'data/wavs/BAC009S0915W0292.wav'},
],
},
]
# we use whisper encoder for ASR task, so need modify code above
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
audios=audio_inputs,
return_tensors="pt",
audio_kwargs={'use_whisper_encoder': True}
)
outputs = model.generate(
**inputs,
max_new_tokens=512,
use_cache=True,
eos_token_id=processor.gen_terminator,
generation_config=generation_config,
use_whisper_encoder=True
)
```
```python
# speech2speech
messages = [
{
"role": "HUMAN",
"content": [
{"type": "audio", "audio": 'data/wavs/speechQA_sample.wav'},
],
},
]
generation_config = GenerationConfig.from_dict({
'output_hidden_states': True,
'return_dict_in_generate': True,
'no_repeat_ngram_size': 10}
)
outputs = model.generate(
**inputs,
max_new_tokens=512,
use_cache=True,
eos_token_id=processor.gen_terminator,
generation_config=generation_config,
use_whisper_encoder=False
)
generated_ids = outputs.sequences
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
# speechQA result
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
# for TTS
from modeling_bailing_talker import AudioDetokenizer
model_name_or_path = model.config._name_or_path
audio_detokenizer = AudioDetokenizer(
f'{model_name_or_path}/talker/audio_detokenizer.yaml',
flow_model_path=f'{model_name_or_path}/talker/flow.pt',
hifigan_model_path=f'{model_name_or_path}/talker/hift.pt'
)
spk_input = torch.load('data/spks/luna.pt')
thinker_reply_part = outputs.hidden_states[0][0] + outputs.hidden_states[0][-1]
# Setting thinker_reply_part to None allows the talker to operate as a standalone TTS model, independent of the language model.
audio_tokens = model.talker.omni_audio_generation(
output_text,
thinker_reply_part=thinker_reply_part, **spk_input)
waveform = audio_detokenizer.token2wav(audio_tokens, save_path='out.wav', **spk_input)
```
For detailed usage for ASR, SpeechQA, and TTS tasks, please refer to `test_audio_tasks.py`
### Image Generation & Edit
Ming-omni natively supports image generation and image editing. To use this function, you only need to add the corresponding parameters in the generate function.
```python
# Image generation mode currently limits the range of input pixels.
gen_input_pixels = 451584
processor.max_pixels = gen_input_pixels
processor.min_pixels = gen_input_pixels
def generate(messages, processor, model, **image_gen_param):
text = processor.apply_chat_template(messages, add_generation_prompt=True)
image_inputs, video_inputs, audio_inputs = processor.process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
audios=audio_inputs,
return_tensors="pt",
).to(model.device)
for k in inputs.keys():
if k == "pixel_values" or k == "pixel_values_videos" or k == "audio_feats":
inputs[k] = inputs[k].to(dtype=torch.bfloat16)
print(image_gen_param)
image = model.generate(
**inputs,
image_gen=True,
**image_gen_param,
)
return image
```
Text-to-image
```python
messages = [
{
"role": "HUMAN",
"content": [
{"type": "text", "text": "Draw a girl with short hair."},
],
}
]
image = generate(
messages=messages, processor=processor, model=model,
image_gen_cfg=6.0, image_gen_steps=20, image_gen_width=480, image_gen_height=544
)
image.save("./t2i.jpg")
```
Edit
```python
messages = [
{
"role": "HUMAN",
"content": [
{"type": "image", "image": "samples/cake.jpg"},
{"type": "text", "text": "add a candle on top of the cake"},
],
}
]
image = generate(
messages=messages, processor=processor, model=model,
image_gen_cfg=6.0, image_gen_steps=20, image_gen_width=512, image_gen_height=512
)
image.save("./edit.jpg")
```
## License and Legal Disclaimer
This code repository is licensed under the [MIT License](../LICENSE), and the Legal Disclaimer is located in the [LEGAL.md file](../LEGAL.md) under the project's root directory.
## Citation
If you find our work helpful, feel free to give us a cite.
```bibtex
@misc{Mingomni2025,
title = {Ming-Omni: A Unified Multimodal Model for Perception and Generation},
author = {Inclusion AI},
year = {2025},
eprint = {2506.09344},
archivePrefix = {arXiv},
url = {https://arxiv.org/abs/2506.09344}
}
``` |
thesantoshbist/fwu-llm | thesantoshbist | 2025-06-16T10:20:46Z | 53 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"fwu",
"santoshbist",
"farwestern-ai",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-12T11:55:40Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
tags:
- fwu
- santoshbist
- farwestern-ai
---
# FWU Assistant Model
## Model Description
This is a fine-tuned LLM specialized for Far Western University (FWU) information and educational assistance. The model has been customized to provide accurate information about FWU programs, courses, admissions, faculty, and campus resources while maintaining general conversational abilities.
## Training Data
This model was trained on:
- Conversations with students and faculty at FWU
- Academic information and educational resources
- General knowledge with emphasis on educational contexts
## Capabilities
- Answers questions about Far Western University programs and policies
- Provides assistance with academic inquiries
- Helps with general knowledge questions
- Maintains conversational context for natural interactions
## Use Cases
- Student information services
- Academic guidance
- Educational assistance
- University information desk
- Virtual campus guide
- Conversational AI
- Question answering
- Text generation
## Limitations
- Limited knowledge of events after training cutoff
- May occasionally provide incorrect information
- Not a replacement for official university guidance
- Doesn't have access to student records or private information
## Ethical Considerations
This model is intended for educational and informational purposes only. It should not be used for making critical academic or administrative decisions without verification from official university sources.
## Additional Information
Developed by Santosh Bist at Far Western University. For issues or feedback, please contact [email protected].
## Model Description
A custom 1B-parameter LLM built primarily for FWU (Far Western University).
## Author
Santosh Bist
## Version
1.0.0 |
aieng-lab/t5-large_smell-doc | aieng-lab | 2025-06-16T10:20:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:19:58Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-large
pipeline_tag: text-classification
---
# T5 large for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-large](https://huggingface.co/t5-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
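A minimal usage sketch with 🤗 transformers (an assumption-laden sketch: it presumes the checkpoint loads with the standard `text-classification` pipeline and produces independent sigmoid scores per label; the 0.5 threshold and example text are illustrative):
```python
from transformers import pipeline

# Hedged sketch; threshold and example text are illustrative, not from the card.
clf = pipeline(
    "text-classification",
    model="aieng-lab/t5-large_smell-doc",
    function_to_apply="sigmoid",  # multi-label: independent per-label scores
    top_k=None,                   # return scores for every label
)

docs = ["This docstring restates the same behavior in three different places."]
for label_scores in clf(docs):
    smells = [s["label"] for s in label_scores if s["score"] > 0.5]
    print(smells)
```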
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
HSE-Chukchi-NLP/gemma3-ckt-rus | HSE-Chukchi-NLP | 2025-06-16T10:19:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:19:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
StuffedPumpkins/ChrisP5 | StuffedPumpkins | 2025-06-16T10:19:32Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-06-16T10:19:21Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/chrisp5_002150_00_20250611212807.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ChrisP5
license: mit
---
# ChrisP5
<Gallery />
## Trigger words
You should use `ChrisP5` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/StuffedPumpkins/ChrisP5/tree/main) them in the Files & versions tab.
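A minimal diffusers sketch for loading this LoRA on top of the base model (assuming a standard FLUX-style LoRA in Safetensors format; dtype, device, and generation settings are illustrative):
```python
import torch
from diffusers import FluxPipeline

# Load the base model, then attach the LoRA weights from this repo
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("StuffedPumpkins/ChrisP5")

# Include the trigger word so the LoRA activates
image = pipe("ChrisP5, portrait photo, studio lighting", num_inference_steps=28).images[0]
image.save("chrisp5.png")
```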
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.75_0.25_epoch2 | MinaMila | 2025-06-16T10:19:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:17:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aieng-lab/t5-base_smell-doc | aieng-lab | 2025-06-16T10:19:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:18:57Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-base
pipeline_tag: text-classification
---
# T5 base for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-base](https://huggingface.co/t5-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/t5-small_smell-doc | aieng-lab | 2025-06-16T10:18:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:18:29Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-small
pipeline_tag: text-classification
---
# T5 small for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-small](https://huggingface.co/t5-small)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.75_epoch1 | MinaMila | 2025-06-16T10:18:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:16:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF | Rivaidan | 2025-06-16T10:18:06Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:quantized:inflatebot/MN-12B-Mag-Mell-R1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-09T14:14:25Z | ---
base_model: inflatebot/MN-12B-Mag-Mell-R1
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF
This model was converted to GGUF format from [`inflatebot/MN-12B-Mag-Mell-R1`](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF --hf-file mn-12b-mag-mell-r1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF --hf-file mn-12b-mag-mell-r1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF --hf-file mn-12b-mag-mell-r1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF --hf-file mn-12b-mag-mell-r1-q8_0.gguf -c 2048
```
|
zhenxue/dqn-SpaceInvadersNoFrameskip-v4 | zhenxue | 2025-06-16T10:17:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T09:45:40Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 503.00 +/- 144.43
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhenxue -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhenxue -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zhenxue
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
divyanshu29jha/cpt_llama-3.2-3b_lonza-group-data | divyanshu29jha | 2025-06-16T10:16:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:09:05Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** divyanshu29jha
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
meto/welfare | meto | 2025-06-16T10:16:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:14:28Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** meto
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aplux/YOLOv10b | aplux | 2025-06-16T10:15:34Z | 0 | 0 | null | [
"onnx",
"AIoT",
"QNN",
"object-detection",
"license:agpl-3.0",
"region:us"
] | object-detection | 2025-06-13T01:59:30Z | ---
license: agpl-3.0
pipeline_tag: object-detection
tags:
- AIoT
- QNN
---

## YOLOv10b: Object Detection
YOLOv10b is the large-scale model in the YOLOv10 family, designed for high-precision object detection tasks. Compared to the lightweight and medium variants, YOLOv10b features a deeper network architecture and more parameters, enabling it to capture richer feature representations and significantly improving the detection of small objects and performance in complex scenes. The model employs an advanced anchor-free mechanism, combined with multi-scale feature fusion and a powerful decoupled head design, enhancing detection accuracy and robustness. YOLOv10b is suitable for deployment on high-performance servers or advanced edge devices, and is widely used in autonomous driving, intelligent security, and industrial inspection applications with demanding accuracy requirements.
### Source model
- Input shape: 1x3x640x640
- Number of parameters: 19.62M
- Model size: 72.99M
- Output shape: 1x300x6
The source model can be found [here](https://github.com/THU-MIG/yolov10)
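A minimal onnxruntime inference sketch under stated assumptions: the exported graph takes a normalized 640×640 RGB tensor in NCHW layout, each of the 300 output rows is `[x1, y1, x2, y2, score, class_id]`, and the file and image paths are placeholders.
```python
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("yolov10b.onnx")  # placeholder path

# Preprocess: resize to 640x640, scale to [0, 1], NCHW layout with batch dim
img = Image.open("sample.jpg").convert("RGB").resize((640, 640))
x = (np.asarray(img, dtype=np.float32) / 255.0).transpose(2, 0, 1)[None]

(dets,) = session.run(None, {session.get_inputs()[0].name: x})  # (1, 300, 6)
for x1, y1, x2, y2, score, cls in dets[0]:
    if score > 0.5:  # illustrative confidence threshold
        print(f"class={int(cls)} score={score:.2f} box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```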
## Performance Reference
Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [AGPL-3.0](https://github.com/THU-MIG/yolov10/blob/main/LICENSE)
- Deployable Model: [AGPL-3.0](https://github.com/THU-MIG/yolov10/blob/main/LICENSE) |
electroglyph/Qwen3-Embedding-0.6B-onnx-uint8 | electroglyph | 2025-06-16T10:15:25Z | 112 | 8 | sentence-transformers | [
"sentence-transformers",
"onnx",
"qwen3",
"text-generation",
"transformers",
"sentence-similarity",
"feature-extraction",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:quantized:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-08T23:20:28Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B-Base
tags:
- transformers
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# Update
I've improved the quality of the model, but size increased from 571MiB to 624MiB.
There's now only a ~1% difference in retrieval performance between this model and the full f32 model.
This model is ~6% more accurate at retrieval than the onnx-community uint8 model with f32 output.
This model is somewhere around 3.5% more accurate at retrieval than the previous version of this model.
Inference speed was the same on my hardware vs. previous model (Ryzen CPU).
# Qwen3-Embedding-0.6B-onnx-uint8
This is an onnx version of https://huggingface.co/Qwen/Qwen3-Embedding-0.6B
This model has been dynamically quantized to uint8, and further modified to output a uint8 1024 dim tensor.
This model is compatible with Qdrant fastembed; please note these details:
- Execute the model without pooling and without normalization
- Pay attention to the example query format in the code below
# Quantization method
I created a little onnx model instrumentation framework to assist in quantization. I generated calibration data, created an instrumented onnx model, and recorded the range of values for every tensor in the model during inference. I tested different criteria for excluding nodes until I settled on what I felt was a good size/accuracy tradeoff. I ended up excluding 484 of the most sensitive nodes from quantization.
After that I generated 1 million tokens of calibration data and recorded the range of float32 outputs seen during inference.
The range I found: -0.3009805381298065 to 0.3952634334564209
I used that range for an asymmetric linear quantization from float32 -> uint8.
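For reference, here is a minimal sketch of what that asymmetric float32 -> uint8 mapping looks like, assuming a standard linear scheme (the exact rounding used in the released model may differ):
```python
import numpy as np

# Calibrated output range reported above
F32_MIN, F32_MAX = -0.3009805381298065, 0.3952634334564209

scale = (F32_MAX - F32_MIN) / 255.0   # one uint8 step in float space
zero_point = round(-F32_MIN / scale)  # uint8 code that represents 0.0

def quantize(x: np.ndarray) -> np.ndarray:
    """Map float32 embeddings into uint8 using the calibrated range."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q: np.ndarray) -> np.ndarray:
    """Approximately recover the float32 values from uint8 codes."""
    return (q.astype(np.float32) - zero_point) * scale
```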
<details>
<summary>Here are the nodes I excluded</summary>
```python
["/0/auto_model/ConstantOfShape",
"/0/auto_model/Constant_28",
"/0/auto_model/layers.25/post_attention_layernorm/Pow",
"/0/auto_model/layers.26/input_layernorm/Pow",
"/0/auto_model/layers.25/input_layernorm/Pow",
"/0/auto_model/layers.24/post_attention_layernorm/Pow",
"/0/auto_model/layers.24/input_layernorm/Pow",
"/0/auto_model/layers.23/post_attention_layernorm/Pow",
"/0/auto_model/layers.23/input_layernorm/Pow",
"/0/auto_model/layers.22/post_attention_layernorm/Pow",
"/0/auto_model/layers.22/input_layernorm/Pow",
"/0/auto_model/layers.3/input_layernorm/Pow",
"/0/auto_model/layers.4/input_layernorm/Pow",
"/0/auto_model/layers.3/post_attention_layernorm/Pow",
"/0/auto_model/layers.21/post_attention_layernorm/Pow",
"/0/auto_model/layers.5/input_layernorm/Pow",
"/0/auto_model/layers.4/post_attention_layernorm/Pow",
"/0/auto_model/layers.5/post_attention_layernorm/Pow",
"/0/auto_model/layers.6/input_layernorm/Pow",
"/0/auto_model/layers.6/post_attention_layernorm/Pow",
"/0/auto_model/layers.7/input_layernorm/Pow",
"/0/auto_model/layers.8/input_layernorm/Pow",
"/0/auto_model/layers.7/post_attention_layernorm/Pow",
"/0/auto_model/layers.26/post_attention_layernorm/Pow",
"/0/auto_model/layers.9/input_layernorm/Pow",
"/0/auto_model/layers.8/post_attention_layernorm/Pow",
"/0/auto_model/layers.21/input_layernorm/Pow",
"/0/auto_model/layers.20/post_attention_layernorm/Pow",
"/0/auto_model/layers.9/post_attention_layernorm/Pow",
"/0/auto_model/layers.10/input_layernorm/Pow",
"/0/auto_model/layers.20/input_layernorm/Pow",
"/0/auto_model/layers.11/input_layernorm/Pow",
"/0/auto_model/layers.10/post_attention_layernorm/Pow",
"/0/auto_model/layers.12/input_layernorm/Pow",
"/0/auto_model/layers.11/post_attention_layernorm/Pow",
"/0/auto_model/layers.12/post_attention_layernorm/Pow",
"/0/auto_model/layers.13/input_layernorm/Pow",
"/0/auto_model/layers.19/post_attention_layernorm/Pow",
"/0/auto_model/layers.13/post_attention_layernorm/Pow",
"/0/auto_model/layers.14/input_layernorm/Pow",
"/0/auto_model/layers.19/input_layernorm/Pow",
"/0/auto_model/layers.18/post_attention_layernorm/Pow",
"/0/auto_model/layers.14/post_attention_layernorm/Pow",
"/0/auto_model/layers.15/input_layernorm/Pow",
"/0/auto_model/layers.16/input_layernorm/Pow",
"/0/auto_model/layers.15/post_attention_layernorm/Pow",
"/0/auto_model/layers.18/input_layernorm/Pow",
"/0/auto_model/layers.17/post_attention_layernorm/Pow",
"/0/auto_model/layers.17/input_layernorm/Pow",
"/0/auto_model/layers.16/post_attention_layernorm/Pow",
"/0/auto_model/layers.27/post_attention_layernorm/Pow",
"/0/auto_model/layers.27/input_layernorm/Pow",
"/0/auto_model/norm/Pow",
"/0/auto_model/layers.25/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.25/post_attention_layernorm/Add",
"/0/auto_model/layers.26/input_layernorm/Add",
"/0/auto_model/layers.26/input_layernorm/ReduceMean",
"/0/auto_model/layers.25/input_layernorm/ReduceMean",
"/0/auto_model/layers.25/input_layernorm/Add",
"/0/auto_model/layers.24/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.24/post_attention_layernorm/Add",
"/0/auto_model/layers.24/input_layernorm/Add",
"/0/auto_model/layers.24/input_layernorm/ReduceMean",
"/0/auto_model/layers.23/post_attention_layernorm/Add",
"/0/auto_model/layers.23/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.23/input_layernorm/ReduceMean",
"/0/auto_model/layers.23/input_layernorm/Add",
"/0/auto_model/layers.22/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.22/post_attention_layernorm/Add",
"/0/auto_model/layers.26/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.26/post_attention_layernorm/Add",
"/0/auto_model/layers.22/input_layernorm/ReduceMean",
"/0/auto_model/layers.22/input_layernorm/Add",
"/0/auto_model/layers.3/input_layernorm/Add",
"/0/auto_model/layers.3/input_layernorm/ReduceMean",
"/0/auto_model/layers.21/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.21/post_attention_layernorm/Add",
"/0/auto_model/layers.4/input_layernorm/Add",
"/0/auto_model/layers.4/input_layernorm/ReduceMean",
"/0/auto_model/layers.3/post_attention_layernorm/Add",
"/0/auto_model/layers.3/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.5/input_layernorm/Add",
"/0/auto_model/layers.5/input_layernorm/ReduceMean",
"/0/auto_model/layers.4/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.4/post_attention_layernorm/Add",
"/0/auto_model/layers.5/post_attention_layernorm/Add",
"/0/auto_model/layers.5/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.6/input_layernorm/Add",
"/0/auto_model/layers.6/input_layernorm/ReduceMean",
"/0/auto_model/layers.6/post_attention_layernorm/Add",
"/0/auto_model/layers.6/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.7/input_layernorm/Add",
"/0/auto_model/layers.7/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/input_layernorm/Add",
"/0/auto_model/layers.7/post_attention_layernorm/Add",
"/0/auto_model/layers.7/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/input_layernorm/Add",
"/0/auto_model/layers.9/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/post_attention_layernorm/Add",
"/0/auto_model/layers.8/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.21/input_layernorm/Add",
"/0/auto_model/layers.21/input_layernorm/ReduceMean",
"/0/auto_model/layers.20/post_attention_layernorm/Add",
"/0/auto_model/layers.20/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/post_attention_layernorm/Add",
"/0/auto_model/layers.10/input_layernorm/ReduceMean",
"/0/auto_model/layers.10/input_layernorm/Add",
"/0/auto_model/layers.20/input_layernorm/Add",
"/0/auto_model/layers.20/input_layernorm/ReduceMean",
"/0/auto_model/layers.11/input_layernorm/ReduceMean",
"/0/auto_model/layers.11/input_layernorm/Add",
"/0/auto_model/layers.10/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.10/post_attention_layernorm/Add",
"/0/auto_model/layers.12/input_layernorm/ReduceMean",
"/0/auto_model/layers.12/input_layernorm/Add",
"/0/auto_model/layers.11/post_attention_layernorm/Add",
"/0/auto_model/layers.11/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.12/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.12/post_attention_layernorm/Add",
"/0/auto_model/layers.13/input_layernorm/Add",
"/0/auto_model/layers.13/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/post_attention_layernorm/Add",
"/0/auto_model/layers.19/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.13/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.13/post_attention_layernorm/Add",
"/0/auto_model/layers.14/input_layernorm/Add",
"/0/auto_model/layers.14/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/input_layernorm/Add",
"/0/auto_model/layers.18/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.18/post_attention_layernorm/Add",
"/0/auto_model/layers.14/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.14/post_attention_layernorm/Add",
"/0/auto_model/layers.15/input_layernorm/ReduceMean",
"/0/auto_model/layers.15/input_layernorm/Add",
"/0/auto_model/layers.16/input_layernorm/Add",
"/0/auto_model/layers.16/input_layernorm/ReduceMean",
"/0/auto_model/layers.15/post_attention_layernorm/Add",
"/0/auto_model/layers.15/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.18/input_layernorm/Add",
"/0/auto_model/layers.18/input_layernorm/ReduceMean",
"/0/auto_model/layers.17/post_attention_layernorm/Add",
"/0/auto_model/layers.17/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.17/input_layernorm/ReduceMean",
"/0/auto_model/layers.17/input_layernorm/Add",
"/0/auto_model/layers.16/post_attention_layernorm/Add",
"/0/auto_model/layers.16/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.27/post_attention_layernorm/Add",
"/0/auto_model/layers.27/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.27/input_layernorm/Add",
"/0/auto_model/layers.27/input_layernorm/ReduceMean",
"/0/auto_model/layers.27/self_attn/q_norm/Pow",
"/0/auto_model/layers.14/self_attn/k_norm/Pow",
"/0/auto_model/layers.26/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/self_attn/k_norm/Pow",
"/0/auto_model/layers.8/self_attn/k_norm/Pow",
"/0/auto_model/layers.24/self_attn/k_norm/Pow",
"/0/auto_model/layers.24/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/self_attn/k_norm/Pow",
"/0/auto_model/layers.23/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/self_attn/k_norm/Pow",
"/0/auto_model/layers.12/self_attn/k_norm/Pow",
"/0/auto_model/layers.13/self_attn/k_norm/Pow",
"/0/auto_model/layers.2/mlp/down_proj/MatMul",
"/0/auto_model/layers.3/post_attention_layernorm/Cast",
"/0/auto_model/layers.3/Add",
"/0/auto_model/layers.3/Add_1",
"/0/auto_model/layers.4/input_layernorm/Cast",
"/0/auto_model/layers.3/input_layernorm/Cast",
"/0/auto_model/layers.2/Add_1",
"/0/auto_model/layers.4/Add",
"/0/auto_model/layers.4/post_attention_layernorm/Cast",
"/0/auto_model/layers.5/input_layernorm/Cast",
"/0/auto_model/layers.4/Add_1",
"/0/auto_model/layers.5/post_attention_layernorm/Cast",
"/0/auto_model/layers.5/Add",
"/0/auto_model/layers.5/Add_1",
"/0/auto_model/layers.6/input_layernorm/Cast",
"/0/auto_model/layers.7/Add_1",
"/0/auto_model/layers.8/input_layernorm/Cast",
"/0/auto_model/layers.7/Add",
"/0/auto_model/layers.7/post_attention_layernorm/Cast",
"/0/auto_model/layers.6/Add",
"/0/auto_model/layers.6/post_attention_layernorm/Cast",
"/0/auto_model/layers.6/Add_1",
"/0/auto_model/layers.7/input_layernorm/Cast",
"/0/auto_model/layers.8/Add",
"/0/auto_model/layers.8/post_attention_layernorm/Cast",
"/0/auto_model/layers.9/input_layernorm/Cast",
"/0/auto_model/layers.8/Add_1",
"/0/auto_model/layers.9/post_attention_layernorm/Cast",
"/0/auto_model/layers.9/Add",
"/0/auto_model/layers.9/Add_1",
"/0/auto_model/layers.10/input_layernorm/Cast",
"/0/auto_model/layers.11/input_layernorm/Cast",
"/0/auto_model/layers.10/Add_1",
"/0/auto_model/layers.10/Add",
"/0/auto_model/layers.10/post_attention_layernorm/Cast",
"/0/auto_model/layers.11/Add",
"/0/auto_model/layers.11/post_attention_layernorm/Cast",
"/0/auto_model/layers.11/Add_1",
"/0/auto_model/layers.12/input_layernorm/Cast",
"/0/auto_model/layers.12/Add",
"/0/auto_model/layers.12/post_attention_layernorm/Cast",
"/0/auto_model/layers.12/Add_1",
"/0/auto_model/layers.13/input_layernorm/Cast",
"/0/auto_model/layers.13/Add",
"/0/auto_model/layers.13/post_attention_layernorm/Cast",
"/0/auto_model/layers.14/input_layernorm/Cast",
"/0/auto_model/layers.13/Add_1",
"/0/auto_model/layers.14/Add_1",
"/0/auto_model/layers.15/input_layernorm/Cast",
"/0/auto_model/layers.14/post_attention_layernorm/Cast",
"/0/auto_model/layers.14/Add",
"/0/auto_model/layers.15/post_attention_layernorm/Cast",
"/0/auto_model/layers.15/Add_1",
"/0/auto_model/layers.16/input_layernorm/Cast",
"/0/auto_model/layers.15/Add",
"/0/auto_model/layers.17/input_layernorm/Cast",
"/0/auto_model/layers.16/Add_1",
"/0/auto_model/layers.16/Add",
"/0/auto_model/layers.16/post_attention_layernorm/Cast",
"/0/auto_model/layers.19/input_layernorm/Cast",
"/0/auto_model/layers.18/Add_1",
"/0/auto_model/layers.18/input_layernorm/Cast",
"/0/auto_model/layers.17/Add_1",
"/0/auto_model/layers.17/Add",
"/0/auto_model/layers.17/post_attention_layernorm/Cast",
"/0/auto_model/layers.18/post_attention_layernorm/Cast",
"/0/auto_model/layers.18/Add",
"/0/auto_model/layers.19/Add",
"/0/auto_model/layers.19/post_attention_layernorm/Cast",
"/0/auto_model/layers.22/Add_1",
"/0/auto_model/layers.23/input_layernorm/Cast",
"/0/auto_model/layers.20/Add_1",
"/0/auto_model/layers.21/input_layernorm/Cast",
"/0/auto_model/layers.21/Add_1",
"/0/auto_model/layers.22/input_layernorm/Cast",
"/0/auto_model/layers.19/Add_1",
"/0/auto_model/layers.20/input_layernorm/Cast",
"/0/auto_model/layers.24/input_layernorm/Cast",
"/0/auto_model/layers.23/Add_1",
"/0/auto_model/layers.22/Add",
"/0/auto_model/layers.22/post_attention_layernorm/Cast",
"/0/auto_model/layers.21/Add",
"/0/auto_model/layers.21/post_attention_layernorm/Cast",
"/0/auto_model/layers.20/Add",
"/0/auto_model/layers.20/post_attention_layernorm/Cast",
"/0/auto_model/layers.23/post_attention_layernorm/Cast",
"/0/auto_model/layers.23/Add",
"/0/auto_model/layers.25/input_layernorm/Cast",
"/0/auto_model/layers.24/Add_1",
"/0/auto_model/layers.24/post_attention_layernorm/Cast",
"/0/auto_model/layers.24/Add",
"/0/auto_model/layers.25/Add",
"/0/auto_model/layers.25/post_attention_layernorm/Cast",
"/0/auto_model/layers.25/Add_1",
"/0/auto_model/layers.26/input_layernorm/Cast",
"/0/auto_model/layers.26/Add",
"/0/auto_model/layers.26/post_attention_layernorm/Cast",
"/0/auto_model/layers.21/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/Add_1",
"/0/auto_model/layers.27/input_layernorm/Cast",
"/0/auto_model/layers.27/Add",
"/0/auto_model/layers.27/post_attention_layernorm/Cast",
"/0/auto_model/norm/Add",
"/0/auto_model/norm/ReduceMean",
"/0/auto_model/layers.23/self_attn/k_norm/Pow",
"/0/auto_model/layers.21/self_attn/k_norm/Pow",
"/0/auto_model/layers.22/self_attn/k_norm/Pow",
"/0/auto_model/layers.10/self_attn/k_norm/Pow",
"/0/auto_model/layers.19/self_attn/q_norm/Pow",
"/0/auto_model/layers.2/mlp/Mul",
"/0/auto_model/layers.22/self_attn/q_norm/Pow",
"/0/auto_model/layers.11/self_attn/k_norm/Pow",
"/0/auto_model/layers.20/self_attn/q_norm/Pow",
"/0/auto_model/layers.20/self_attn/k_norm/Pow",
"/0/auto_model/layers.18/self_attn/q_norm/Pow",
"/0/auto_model/layers.17/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/mlp/down_proj/MatMul",
"/0/auto_model/layers.19/self_attn/k_norm/Pow",
"/0/auto_model/layers.27/Add_1",
"/0/auto_model/norm/Cast",
"/0/auto_model/layers.16/self_attn/k_norm/Pow",
"/0/auto_model/layers.18/self_attn/k_norm/Pow",
"/0/auto_model/layers.11/self_attn/q_norm/Pow",
"/0/auto_model/layers.9/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/self_attn/q_norm/Add",
"/0/auto_model/layers.26/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.14/self_attn/k_norm/Add",
"/0/auto_model/layers.14/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.16/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/mlp/Mul",
"/0/auto_model/layers.27/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.27/self_attn/q_norm/Add",
"/0/auto_model/layers.9/self_attn/k_norm/Pow",
"/0/auto_model/layers.17/self_attn/k_norm/Pow",
"/0/auto_model/layers.26/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.26/self_attn/k_norm/Add",
"/0/auto_model/layers.25/self_attn/k_norm/Add",
"/0/auto_model/layers.25/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.13/self_attn/k_norm/Add",
"/0/auto_model/layers.13/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.10/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/input_layernorm/Mul_1",
"/0/auto_model/layers.27/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.27/self_attn/k_norm/Add",
"/0/auto_model/layers.26/input_layernorm/Mul_1",
"/0/auto_model/layers.15/self_attn/q_norm/Pow",
"/0/auto_model/layers.12/self_attn/k_norm/Add",
"/0/auto_model/layers.12/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.25/self_attn/q_norm/Add",
"/0/auto_model/layers.25/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.24/input_layernorm/Mul_1",
"/0/auto_model/layers.12/self_attn/q_norm/Pow",
"/0/auto_model/layers.24/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.24/self_attn/q_norm/Add",
"/0/auto_model/layers.24/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.24/self_attn/k_norm/Add",
"/0/auto_model/layers.22/mlp/Mul",
"/0/auto_model/layers.2/post_attention_layernorm/Pow",
"/0/auto_model/layers.23/mlp/Mul",
"/0/auto_model/layers.24/mlp/Mul",
"/0/auto_model/layers.23/input_layernorm/Mul_1",
"/0/auto_model/layers.14/self_attn/q_norm/Pow",
"/0/auto_model/layers.14/self_attn/k_proj/MatMul",
"/0/auto_model/layers.14/self_attn/k_norm/Cast",
"/0/auto_model/layers.14/self_attn/Reshape_1",
"/0/auto_model/layers.21/mlp/Mul",
"/0/auto_model/layers.3/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.3/input_layernorm/Sqrt",
"/0/auto_model/layers.4/input_layernorm/Sqrt",
"/0/auto_model/layers.5/input_layernorm/Sqrt",
"/0/auto_model/layers.4/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.5/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.6/input_layernorm/Sqrt",
"/0/auto_model/layers.6/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.8/input_layernorm/Sqrt",
"/0/auto_model/layers.8/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.7/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.7/input_layernorm/Sqrt",
"/0/auto_model/layers.9/input_layernorm/Sqrt",
"/0/auto_model/layers.10/input_layernorm/Sqrt",
"/0/auto_model/layers.9/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.11/input_layernorm/Sqrt",
"/0/auto_model/layers.10/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.12/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.11/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.12/input_layernorm/Sqrt",
"/0/auto_model/layers.13/input_layernorm/Sqrt",
"/0/auto_model/layers.14/input_layernorm/Sqrt",
"/0/auto_model/layers.13/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.15/input_layernorm/Sqrt",
"/0/auto_model/layers.14/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.16/input_layernorm/Sqrt",
"/0/auto_model/layers.15/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.17/input_layernorm/Sqrt",
"/0/auto_model/layers.16/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.19/input_layernorm/Sqrt",
"/0/auto_model/layers.17/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.18/input_layernorm/Sqrt",
"/0/auto_model/layers.18/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.19/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.23/input_layernorm/Sqrt",
"/0/auto_model/layers.20/input_layernorm/Sqrt",
"/0/auto_model/layers.21/input_layernorm/Sqrt",
"/0/auto_model/layers.22/input_layernorm/Sqrt",
"/0/auto_model/layers.22/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.24/input_layernorm/Sqrt",
"/0/auto_model/layers.20/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.21/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.23/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.25/input_layernorm/Sqrt",
"/0/auto_model/layers.24/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.25/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.26/input_layernorm/Sqrt",
"/0/auto_model/layers.26/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.15/self_attn/k_norm/Pow",
"/0/auto_model/layers.27/input_layernorm/Sqrt",
"/0/auto_model/layers.27/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.2/input_layernorm/Pow",
"/0/auto_model/layers.26/mlp/Mul",
"/0/auto_model/layers.23/self_attn/q_norm/Add",
"/0/auto_model/layers.23/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.13/self_attn/q_norm/Pow",
"/0/auto_model/layers.21/self_attn/q_norm/Add",
"/0/auto_model/layers.21/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.6/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/self_attn/Reshape_7",
"/0/auto_model/layers.27/self_attn/MatMul_1",
"/0/auto_model/layers.27/self_attn/Transpose_4",
"/0/auto_model/layers.26/self_attn/Expand_1",
"/0/auto_model/layers.26/self_attn/Unsqueeze_19",
"/0/auto_model/layers.26/self_attn/v_proj/MatMul",
"/0/auto_model/layers.26/self_attn/Transpose_2",
"/0/auto_model/layers.26/self_attn/Reshape_6",
"/0/auto_model/layers.26/self_attn/Reshape_2",
"/0/auto_model/layers.11/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.11/self_attn/k_norm/Add",
"/0/auto_model/layers.22/input_layernorm/Mul_1",
"/0/auto_model/layers.25/mlp/Mul",
"/0/auto_model/layers.8/self_attn/k_norm/Cast",
"/0/auto_model/layers.8/self_attn/k_proj/MatMul",
"/0/auto_model/layers.8/self_attn/Reshape_1",
"/0/auto_model/layers.21/input_layernorm/Mul_1",
"/0/auto_model/layers.5/self_attn/q_norm/Pow",
"/0/auto_model/layers.22/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.22/self_attn/q_norm/Add",
"/0/auto_model/layers.22/mlp/down_proj/MatMul",
"/0/auto_model/layers.23/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.23/self_attn/k_norm/Add",
"/0/auto_model/layers.23/mlp/down_proj/MatMul",
"/0/auto_model/layers.26/mlp/down_proj/MatMul",
"/0/auto_model/layers.1/self_attn/Add_2",
"/0/auto_model/layers.2/self_attn/Add_2",
"/0/auto_model/layers.6/self_attn/Add_2",
"/0/auto_model/layers.11/self_attn/Add_2",
"/0/auto_model/layers.12/self_attn/Add_2",
"/0/auto_model/layers.16/self_attn/Add_2",
"/0/auto_model/layers.21/self_attn/Add_2",
"/0/auto_model/layers.24/self_attn/Add_2",
"/0/auto_model/layers.0/self_attn/Add_2",
"/0/auto_model/layers.8/self_attn/Add_2",
"/0/auto_model/layers.13/self_attn/Add_2",
"/0/auto_model/layers.26/self_attn/Add_2",
"/0/auto_model/layers.3/self_attn/Add_2",
"/0/auto_model/layers.15/self_attn/Add_2",
"/0/auto_model/layers.25/self_attn/Add_2",
"/0/auto_model/layers.4/self_attn/Add_2",
"/0/auto_model/layers.14/self_attn/Add_2",
"/0/auto_model/layers.22/self_attn/Add_2",
"/0/auto_model/layers.9/self_attn/Add_2",
"/0/auto_model/layers.23/self_attn/Add_2",
"/0/auto_model/layers.10/self_attn/Add_2",
"/0/auto_model/layers.5/self_attn/Add_2",
"/0/auto_model/layers.19/self_attn/Add_2",
"/0/auto_model/layers.7/self_attn/Add_2",
"/0/auto_model/layers.27/self_attn/Add_2",
"/0/auto_model/layers.18/self_attn/Add_2",
"/0/auto_model/layers.20/self_attn/Add_2",
"/0/auto_model/layers.17/self_attn/Add_2",
"/0/auto_model/Slice_1",
"/0/auto_model/layers.5/self_attn/Slice_4",
"/0/auto_model/layers.12/self_attn/Slice_4",
"/0/auto_model/layers.18/self_attn/Slice_4",
"/0/auto_model/layers.3/self_attn/Slice_4",
"/0/auto_model/layers.11/self_attn/Slice_4",
"/0/auto_model/layers.22/self_attn/Slice_4",
"/0/auto_model/Expand",
"/0/auto_model/layers.4/self_attn/Slice_4",
"/0/auto_model/Slice_2",
"/0/auto_model/layers.8/self_attn/Slice_4",
"/0/auto_model/layers.2/self_attn/Slice_4",
"/0/auto_model/layers.15/self_attn/Slice_4",
"/0/auto_model/layers.26/self_attn/Slice_4",
"/0/auto_model/layers.24/self_attn/Slice_4",
"/0/auto_model/Expand_1",
"/0/auto_model/layers.14/self_attn/Slice_4",
"/0/auto_model/layers.21/self_attn/Slice_4",
"/0/auto_model/layers.1/self_attn/Slice_4",
"/0/auto_model/Reshape_2",
"/0/auto_model/layers.19/self_attn/Slice_4",
"/0/auto_model/Slice",
"/0/auto_model/layers.6/self_attn/Slice_4",
"/0/auto_model/layers.0/self_attn/Slice_4",
"/0/auto_model/layers.25/self_attn/Slice_4",
"/0/auto_model/Unsqueeze_4",
"/0/auto_model/layers.10/self_attn/Slice_4",
"/0/auto_model/layers.23/self_attn/Slice_4",
"/0/auto_model/layers.17/self_attn/Slice_4",
"/0/auto_model/Where_1",
"/0/auto_model/layers.27/self_attn/Slice_4",
"/0/auto_model/layers.20/self_attn/Slice_4",
"/0/auto_model/Add",
"/0/auto_model/Mul",
"/0/auto_model/layers.7/self_attn/Slice_4",
"/0/auto_model/layers.13/self_attn/Slice_4",
"/0/auto_model/layers.9/self_attn/Slice_4",
"/0/auto_model/layers.16/self_attn/Slice_4",
"/0/auto_model/Unsqueeze_3",
"/0/auto_model/ScatterND"]
```
</details>
# Benchmarks
## Speed
Method: one big chunk of text, embedded 10 times per model.
- dynamic_int4.onnx: 45.37 s (mixed int4/uint8 quantization)
- opt_f32.onnx: 46.07 s (base f32 model preprocessed for quantization)
- dynamic_uint8.onnx: 34.61 s (this model)

Verdict: This model is about 25% faster on my CPU compared to the base model.
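For reproducibility, the timing above is plain wall-clock over repeated embedding calls. Here is a minimal sketch of the method; `embed` stands in for whatever inference call you benchmark, and the exact text I used isn't published:
```python
import time

def benchmark(embed, text, runs=10):
    """Embed the same chunk of text `runs` times and return wall-clock seconds."""
    start = time.perf_counter()
    for _ in range(runs):
        embed(text)
    return time.perf_counter() - start

# e.g. elapsed = benchmark(lambda t: list(model.embed([t])), big_chunk_of_text)
```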
## Accuracy
I used beir-qdrant with the scifact dataset.
A single retrieval benchmark isn't the most thorough evaluation, so treat these numbers as a rough signal.
I welcome any additional benchmarks from the community; please feel free to share further results.
If someone wants to sponsor me with an NVIDIA GPU, I can have a much faster turnaround time on my model experiments and explore some different quantization strategies.
onnx f32 model with f32 output (baseline):
```
ndcg: {'NDCG@1': 0.57, 'NDCG@3': 0.65655, 'NDCG@5': 0.68177, 'NDCG@10': 0.69999, 'NDCG@100': 0.72749, 'NDCG@1000': 0.73301}
recall: {'Recall@1': 0.53828, 'Recall@3': 0.71517, 'Recall@5': 0.77883, 'Recall@10': 0.83056, 'Recall@100': 0.95333, 'Recall@1000': 0.99667}
precision: {'P@1': 0.57, 'P@3': 0.26111, 'P@5': 0.17467, 'P@10': 0.09467, 'P@100': 0.01083, 'P@1000': 0.00113}
```
onnx dynamic uint8 model with f32 output (previous model's parent):
```
ndcg: {'NDCG@1': 0.52333, 'NDCG@3': 0.58087, 'NDCG@5': 0.59811, 'NDCG@10': 0.6249, 'NDCG@100': 0.66025, 'NDCG@1000': 0.67023}
recall: {'Recall@1': 0.4965, 'Recall@3': 0.62211, 'Recall@5': 0.66622, 'Recall@10': 0.74478, 'Recall@100': 0.90333, 'Recall@1000': 0.98}
precision: {'P@1': 0.52333, 'P@3': 0.22889, 'P@5': 0.15, 'P@10': 0.085, 'P@100': 0.0103, 'P@1000': 0.00111}
```
onnx dynamic uint8 model with uint8 output (previous model):
Note: the fact that this version benchmarked better than its parent is actually a bad sign. I used more calibration data in the current version to avoid a repeat.
```
ndcg: {'NDCG@1': 0.52667, 'NDCG@3': 0.58478, 'NDCG@5': 0.60006, 'NDCG@10': 0.62646, 'NDCG@100': 0.66175, 'NDCG@1000': 0.67171}
recall: {'Recall@1': 0.49983, 'Recall@3': 0.62711, 'Recall@5': 0.66706, 'Recall@10': 0.74478, 'Recall@100': 0.90333, 'Recall@1000': 0.98}
precision: {'P@1': 0.52667, 'P@3': 0.23111, 'P@5': 0.15, 'P@10': 0.085, 'P@100': 0.0103, 'P@1000': 0.00111}
```
onnx dynamic uint8 model with f32 output (this model's parent):
```
ndcg: {'NDCG@1': 0.56, 'NDCG@3': 0.63242, 'NDCG@5': 0.66258, 'NDCG@10': 0.68893, 'NDCG@100': 0.71276, 'NDCG@1000': 0.72}
recall: {'Recall@1': 0.53094, 'Recall@3': 0.68117, 'Recall@5': 0.75417, 'Recall@10': 0.83256, 'Recall@100': 0.94, 'Recall@1000': 0.99667}
precision: {'P@1': 0.56, 'P@3': 0.24778, 'P@5': 0.16867, 'P@10': 0.094, 'P@100': 0.0107, 'P@1000': 0.00113}
```
onnx dynamic uint8 model with uint8 output (this model):
```
ndcg: {'NDCG@1': 0.56, 'NDCG@3': 0.63119, 'NDCG@5': 0.66314, 'NDCG@10': 0.68867, 'NDCG@100': 0.71236, 'NDCG@1000': 0.7201}
recall: {'Recall@1': 0.53094, 'Recall@3': 0.67783, 'Recall@5': 0.75583, 'Recall@10': 0.83089, 'Recall@100': 0.93667, 'Recall@1000': 0.99667}
precision: {'P@1': 0.56, 'P@3': 0.24667, 'P@5': 0.16867, 'P@10': 0.094, 'P@100': 0.01067, 'P@1000': 0.00113}
```
# Example inference/benchmark code and how to use the model with Fastembed
After installing beir-qdrant, make sure to upgrade fastembed.
```python
# pip install qdrant_client beir-qdrant
# pip install -U fastembed
from fastembed import TextEmbedding
from fastembed.common.model_description import PoolingType, ModelSource
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from qdrant_client import QdrantClient
from qdrant_client.models import Datatype
from beir_qdrant.retrieval.models.fastembed import DenseFastEmbedModelAdapter
from beir_qdrant.retrieval.search.dense import DenseQdrantSearch
TextEmbedding.add_custom_model(
model="electroglyph/Qwen3-Embedding-0.6B-onnx-uint8",
pooling=PoolingType.DISABLED,
normalization=False,
sources=ModelSource(hf="electroglyph/Qwen3-Embedding-0.6B-onnx-uint8"),
dim=1024,
model_file="dynamic_uint8.onnx",
)
dataset = "scifact"
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{}.zip".format(dataset)
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
# IMPORTANT: USE THIS (OR A SIMILAR) QUERY FORMAT WITH THIS MODEL:
for k in queries.keys():
queries[k] = (
f"Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: {queries[k]}"
)
qdrant_client = QdrantClient("http://localhost:6333")
model = DenseQdrantSearch(
qdrant_client,
model=DenseFastEmbedModelAdapter(model_name="Qwen3-Embedding-0.6B-onnx-uint8"),
collection_name="scifact-qwen3-uint8",
initialize=True,
datatype=Datatype.UINT8,
)
retriever = EvaluateRetrieval(model)
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
print(f"ndcg: {ndcg}\nrecall: {recall}\nprecision: {precision}")
``` |
electroglyph/Qwen3-Embedding-0.6B-onnx-int4 | electroglyph | 2025-06-16T10:14:40Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"onnx",
"qwen3",
"text-generation",
"transformers",
"sentence-similarity",
"feature-extraction",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:quantized:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-16T08:54:12Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B-Base
tags:
- transformers
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# Qwen3-Embedding-0.6B-onnx-int4
This is an onnx version of https://huggingface.co/Qwen/Qwen3-Embedding-0.6B
This model has been dynamically quantized to int4/uint8, and further modified to output a uint8 1024 dim tensor.
You probably don't want to use this model on CPU. I've tested on a Ryzen CPU with VNNI, and it runs at the same speed as the base f32 model with about 2% lower retrieval accuracy. I'm posting it here in case it's useful for GPU users. Not sure if it actually is, but I already made it, so here it is.
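If you do try it on GPU, here is a minimal onnxruntime sketch. Everything in it is an assumption on my part: that you have `onnxruntime-gpu` installed, that `tokenizer.json` sits next to the model file, and that the graph exposes the usual `input_ids`/`attention_mask` inputs with a single pooled output:
```python
import numpy as np
import onnxruntime as ort
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("tokenizer.json")
session = ort.InferenceSession(
    "dynamic_int4.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

enc = tokenizer.encode("Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: what is int4 quantization?")
inputs = {
    "input_ids": np.array([enc.ids], dtype=np.int64),
    "attention_mask": np.array([enc.attention_mask], dtype=np.int64),
}
(embedding,) = session.run(None, inputs)
print(embedding.shape, embedding.dtype)  # expected: (1, 1024) uint8, pooling is baked into the graph
```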
This model is compatible with qdrant fastembed; please note these details:
- Execute model without pooling and without normalization
- Pay attention to the example query format in the code below
# Quantization method
I did an int4 quantization pass with block size == 128 (block size 32 was extremely close in accuracy), with the same nodes excluded as from my uint8 model.
Then I quantized the remaining non-excluded nodes to uint8 the same way as here: https://huggingface.co/electroglyph/Qwen3-Embedding-0.6B-onnx-uint8 (a minimal reproduction sketch follows the node list below).
<details>
<summary>Here are the nodes I excluded</summary>
```python
["/0/auto_model/ConstantOfShape",
"/0/auto_model/Constant_28",
"/0/auto_model/layers.25/post_attention_layernorm/Pow",
"/0/auto_model/layers.26/input_layernorm/Pow",
"/0/auto_model/layers.25/input_layernorm/Pow",
"/0/auto_model/layers.24/post_attention_layernorm/Pow",
"/0/auto_model/layers.24/input_layernorm/Pow",
"/0/auto_model/layers.23/post_attention_layernorm/Pow",
"/0/auto_model/layers.23/input_layernorm/Pow",
"/0/auto_model/layers.22/post_attention_layernorm/Pow",
"/0/auto_model/layers.22/input_layernorm/Pow",
"/0/auto_model/layers.3/input_layernorm/Pow",
"/0/auto_model/layers.4/input_layernorm/Pow",
"/0/auto_model/layers.3/post_attention_layernorm/Pow",
"/0/auto_model/layers.21/post_attention_layernorm/Pow",
"/0/auto_model/layers.5/input_layernorm/Pow",
"/0/auto_model/layers.4/post_attention_layernorm/Pow",
"/0/auto_model/layers.5/post_attention_layernorm/Pow",
"/0/auto_model/layers.6/input_layernorm/Pow",
"/0/auto_model/layers.6/post_attention_layernorm/Pow",
"/0/auto_model/layers.7/input_layernorm/Pow",
"/0/auto_model/layers.8/input_layernorm/Pow",
"/0/auto_model/layers.7/post_attention_layernorm/Pow",
"/0/auto_model/layers.26/post_attention_layernorm/Pow",
"/0/auto_model/layers.9/input_layernorm/Pow",
"/0/auto_model/layers.8/post_attention_layernorm/Pow",
"/0/auto_model/layers.21/input_layernorm/Pow",
"/0/auto_model/layers.20/post_attention_layernorm/Pow",
"/0/auto_model/layers.9/post_attention_layernorm/Pow",
"/0/auto_model/layers.10/input_layernorm/Pow",
"/0/auto_model/layers.20/input_layernorm/Pow",
"/0/auto_model/layers.11/input_layernorm/Pow",
"/0/auto_model/layers.10/post_attention_layernorm/Pow",
"/0/auto_model/layers.12/input_layernorm/Pow",
"/0/auto_model/layers.11/post_attention_layernorm/Pow",
"/0/auto_model/layers.12/post_attention_layernorm/Pow",
"/0/auto_model/layers.13/input_layernorm/Pow",
"/0/auto_model/layers.19/post_attention_layernorm/Pow",
"/0/auto_model/layers.13/post_attention_layernorm/Pow",
"/0/auto_model/layers.14/input_layernorm/Pow",
"/0/auto_model/layers.19/input_layernorm/Pow",
"/0/auto_model/layers.18/post_attention_layernorm/Pow",
"/0/auto_model/layers.14/post_attention_layernorm/Pow",
"/0/auto_model/layers.15/input_layernorm/Pow",
"/0/auto_model/layers.16/input_layernorm/Pow",
"/0/auto_model/layers.15/post_attention_layernorm/Pow",
"/0/auto_model/layers.18/input_layernorm/Pow",
"/0/auto_model/layers.17/post_attention_layernorm/Pow",
"/0/auto_model/layers.17/input_layernorm/Pow",
"/0/auto_model/layers.16/post_attention_layernorm/Pow",
"/0/auto_model/layers.27/post_attention_layernorm/Pow",
"/0/auto_model/layers.27/input_layernorm/Pow",
"/0/auto_model/norm/Pow",
"/0/auto_model/layers.25/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.25/post_attention_layernorm/Add",
"/0/auto_model/layers.26/input_layernorm/Add",
"/0/auto_model/layers.26/input_layernorm/ReduceMean",
"/0/auto_model/layers.25/input_layernorm/ReduceMean",
"/0/auto_model/layers.25/input_layernorm/Add",
"/0/auto_model/layers.24/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.24/post_attention_layernorm/Add",
"/0/auto_model/layers.24/input_layernorm/Add",
"/0/auto_model/layers.24/input_layernorm/ReduceMean",
"/0/auto_model/layers.23/post_attention_layernorm/Add",
"/0/auto_model/layers.23/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.23/input_layernorm/ReduceMean",
"/0/auto_model/layers.23/input_layernorm/Add",
"/0/auto_model/layers.22/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.22/post_attention_layernorm/Add",
"/0/auto_model/layers.26/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.26/post_attention_layernorm/Add",
"/0/auto_model/layers.22/input_layernorm/ReduceMean",
"/0/auto_model/layers.22/input_layernorm/Add",
"/0/auto_model/layers.3/input_layernorm/Add",
"/0/auto_model/layers.3/input_layernorm/ReduceMean",
"/0/auto_model/layers.21/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.21/post_attention_layernorm/Add",
"/0/auto_model/layers.4/input_layernorm/Add",
"/0/auto_model/layers.4/input_layernorm/ReduceMean",
"/0/auto_model/layers.3/post_attention_layernorm/Add",
"/0/auto_model/layers.3/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.5/input_layernorm/Add",
"/0/auto_model/layers.5/input_layernorm/ReduceMean",
"/0/auto_model/layers.4/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.4/post_attention_layernorm/Add",
"/0/auto_model/layers.5/post_attention_layernorm/Add",
"/0/auto_model/layers.5/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.6/input_layernorm/Add",
"/0/auto_model/layers.6/input_layernorm/ReduceMean",
"/0/auto_model/layers.6/post_attention_layernorm/Add",
"/0/auto_model/layers.6/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.7/input_layernorm/Add",
"/0/auto_model/layers.7/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/input_layernorm/Add",
"/0/auto_model/layers.7/post_attention_layernorm/Add",
"/0/auto_model/layers.7/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/input_layernorm/Add",
"/0/auto_model/layers.9/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/post_attention_layernorm/Add",
"/0/auto_model/layers.8/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.21/input_layernorm/Add",
"/0/auto_model/layers.21/input_layernorm/ReduceMean",
"/0/auto_model/layers.20/post_attention_layernorm/Add",
"/0/auto_model/layers.20/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/post_attention_layernorm/Add",
"/0/auto_model/layers.10/input_layernorm/ReduceMean",
"/0/auto_model/layers.10/input_layernorm/Add",
"/0/auto_model/layers.20/input_layernorm/Add",
"/0/auto_model/layers.20/input_layernorm/ReduceMean",
"/0/auto_model/layers.11/input_layernorm/ReduceMean",
"/0/auto_model/layers.11/input_layernorm/Add",
"/0/auto_model/layers.10/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.10/post_attention_layernorm/Add",
"/0/auto_model/layers.12/input_layernorm/ReduceMean",
"/0/auto_model/layers.12/input_layernorm/Add",
"/0/auto_model/layers.11/post_attention_layernorm/Add",
"/0/auto_model/layers.11/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.12/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.12/post_attention_layernorm/Add",
"/0/auto_model/layers.13/input_layernorm/Add",
"/0/auto_model/layers.13/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/post_attention_layernorm/Add",
"/0/auto_model/layers.19/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.13/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.13/post_attention_layernorm/Add",
"/0/auto_model/layers.14/input_layernorm/Add",
"/0/auto_model/layers.14/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/input_layernorm/Add",
"/0/auto_model/layers.18/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.18/post_attention_layernorm/Add",
"/0/auto_model/layers.14/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.14/post_attention_layernorm/Add",
"/0/auto_model/layers.15/input_layernorm/ReduceMean",
"/0/auto_model/layers.15/input_layernorm/Add",
"/0/auto_model/layers.16/input_layernorm/Add",
"/0/auto_model/layers.16/input_layernorm/ReduceMean",
"/0/auto_model/layers.15/post_attention_layernorm/Add",
"/0/auto_model/layers.15/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.18/input_layernorm/Add",
"/0/auto_model/layers.18/input_layernorm/ReduceMean",
"/0/auto_model/layers.17/post_attention_layernorm/Add",
"/0/auto_model/layers.17/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.17/input_layernorm/ReduceMean",
"/0/auto_model/layers.17/input_layernorm/Add",
"/0/auto_model/layers.16/post_attention_layernorm/Add",
"/0/auto_model/layers.16/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.27/post_attention_layernorm/Add",
"/0/auto_model/layers.27/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.27/input_layernorm/Add",
"/0/auto_model/layers.27/input_layernorm/ReduceMean",
"/0/auto_model/layers.27/self_attn/q_norm/Pow",
"/0/auto_model/layers.14/self_attn/k_norm/Pow",
"/0/auto_model/layers.26/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/self_attn/k_norm/Pow",
"/0/auto_model/layers.8/self_attn/k_norm/Pow",
"/0/auto_model/layers.24/self_attn/k_norm/Pow",
"/0/auto_model/layers.24/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/self_attn/k_norm/Pow",
"/0/auto_model/layers.23/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/self_attn/k_norm/Pow",
"/0/auto_model/layers.12/self_attn/k_norm/Pow",
"/0/auto_model/layers.13/self_attn/k_norm/Pow",
"/0/auto_model/layers.2/mlp/down_proj/MatMul",
"/0/auto_model/layers.3/post_attention_layernorm/Cast",
"/0/auto_model/layers.3/Add",
"/0/auto_model/layers.3/Add_1",
"/0/auto_model/layers.4/input_layernorm/Cast",
"/0/auto_model/layers.3/input_layernorm/Cast",
"/0/auto_model/layers.2/Add_1",
"/0/auto_model/layers.4/Add",
"/0/auto_model/layers.4/post_attention_layernorm/Cast",
"/0/auto_model/layers.5/input_layernorm/Cast",
"/0/auto_model/layers.4/Add_1",
"/0/auto_model/layers.5/post_attention_layernorm/Cast",
"/0/auto_model/layers.5/Add",
"/0/auto_model/layers.5/Add_1",
"/0/auto_model/layers.6/input_layernorm/Cast",
"/0/auto_model/layers.7/Add_1",
"/0/auto_model/layers.8/input_layernorm/Cast",
"/0/auto_model/layers.7/Add",
"/0/auto_model/layers.7/post_attention_layernorm/Cast",
"/0/auto_model/layers.6/Add",
"/0/auto_model/layers.6/post_attention_layernorm/Cast",
"/0/auto_model/layers.6/Add_1",
"/0/auto_model/layers.7/input_layernorm/Cast",
"/0/auto_model/layers.8/Add",
"/0/auto_model/layers.8/post_attention_layernorm/Cast",
"/0/auto_model/layers.9/input_layernorm/Cast",
"/0/auto_model/layers.8/Add_1",
"/0/auto_model/layers.9/post_attention_layernorm/Cast",
"/0/auto_model/layers.9/Add",
"/0/auto_model/layers.9/Add_1",
"/0/auto_model/layers.10/input_layernorm/Cast",
"/0/auto_model/layers.11/input_layernorm/Cast",
"/0/auto_model/layers.10/Add_1",
"/0/auto_model/layers.10/Add",
"/0/auto_model/layers.10/post_attention_layernorm/Cast",
"/0/auto_model/layers.11/Add",
"/0/auto_model/layers.11/post_attention_layernorm/Cast",
"/0/auto_model/layers.11/Add_1",
"/0/auto_model/layers.12/input_layernorm/Cast",
"/0/auto_model/layers.12/Add",
"/0/auto_model/layers.12/post_attention_layernorm/Cast",
"/0/auto_model/layers.12/Add_1",
"/0/auto_model/layers.13/input_layernorm/Cast",
"/0/auto_model/layers.13/Add",
"/0/auto_model/layers.13/post_attention_layernorm/Cast",
"/0/auto_model/layers.14/input_layernorm/Cast",
"/0/auto_model/layers.13/Add_1",
"/0/auto_model/layers.14/Add_1",
"/0/auto_model/layers.15/input_layernorm/Cast",
"/0/auto_model/layers.14/post_attention_layernorm/Cast",
"/0/auto_model/layers.14/Add",
"/0/auto_model/layers.15/post_attention_layernorm/Cast",
"/0/auto_model/layers.15/Add_1",
"/0/auto_model/layers.16/input_layernorm/Cast",
"/0/auto_model/layers.15/Add",
"/0/auto_model/layers.17/input_layernorm/Cast",
"/0/auto_model/layers.16/Add_1",
"/0/auto_model/layers.16/Add",
"/0/auto_model/layers.16/post_attention_layernorm/Cast",
"/0/auto_model/layers.19/input_layernorm/Cast",
"/0/auto_model/layers.18/Add_1",
"/0/auto_model/layers.18/input_layernorm/Cast",
"/0/auto_model/layers.17/Add_1",
"/0/auto_model/layers.17/Add",
"/0/auto_model/layers.17/post_attention_layernorm/Cast",
"/0/auto_model/layers.18/post_attention_layernorm/Cast",
"/0/auto_model/layers.18/Add",
"/0/auto_model/layers.19/Add",
"/0/auto_model/layers.19/post_attention_layernorm/Cast",
"/0/auto_model/layers.22/Add_1",
"/0/auto_model/layers.23/input_layernorm/Cast",
"/0/auto_model/layers.20/Add_1",
"/0/auto_model/layers.21/input_layernorm/Cast",
"/0/auto_model/layers.21/Add_1",
"/0/auto_model/layers.22/input_layernorm/Cast",
"/0/auto_model/layers.19/Add_1",
"/0/auto_model/layers.20/input_layernorm/Cast",
"/0/auto_model/layers.24/input_layernorm/Cast",
"/0/auto_model/layers.23/Add_1",
"/0/auto_model/layers.22/Add",
"/0/auto_model/layers.22/post_attention_layernorm/Cast",
"/0/auto_model/layers.21/Add",
"/0/auto_model/layers.21/post_attention_layernorm/Cast",
"/0/auto_model/layers.20/Add",
"/0/auto_model/layers.20/post_attention_layernorm/Cast",
"/0/auto_model/layers.23/post_attention_layernorm/Cast",
"/0/auto_model/layers.23/Add",
"/0/auto_model/layers.25/input_layernorm/Cast",
"/0/auto_model/layers.24/Add_1",
"/0/auto_model/layers.24/post_attention_layernorm/Cast",
"/0/auto_model/layers.24/Add",
"/0/auto_model/layers.25/Add",
"/0/auto_model/layers.25/post_attention_layernorm/Cast",
"/0/auto_model/layers.25/Add_1",
"/0/auto_model/layers.26/input_layernorm/Cast",
"/0/auto_model/layers.26/Add",
"/0/auto_model/layers.26/post_attention_layernorm/Cast",
"/0/auto_model/layers.21/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/Add_1",
"/0/auto_model/layers.27/input_layernorm/Cast",
"/0/auto_model/layers.27/Add",
"/0/auto_model/layers.27/post_attention_layernorm/Cast",
"/0/auto_model/norm/Add",
"/0/auto_model/norm/ReduceMean",
"/0/auto_model/layers.23/self_attn/k_norm/Pow",
"/0/auto_model/layers.21/self_attn/k_norm/Pow",
"/0/auto_model/layers.22/self_attn/k_norm/Pow",
"/0/auto_model/layers.10/self_attn/k_norm/Pow",
"/0/auto_model/layers.19/self_attn/q_norm/Pow",
"/0/auto_model/layers.2/mlp/Mul",
"/0/auto_model/layers.22/self_attn/q_norm/Pow",
"/0/auto_model/layers.11/self_attn/k_norm/Pow",
"/0/auto_model/layers.20/self_attn/q_norm/Pow",
"/0/auto_model/layers.20/self_attn/k_norm/Pow",
"/0/auto_model/layers.18/self_attn/q_norm/Pow",
"/0/auto_model/layers.17/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/mlp/down_proj/MatMul",
"/0/auto_model/layers.19/self_attn/k_norm/Pow",
"/0/auto_model/layers.27/Add_1",
"/0/auto_model/norm/Cast",
"/0/auto_model/layers.16/self_attn/k_norm/Pow",
"/0/auto_model/layers.18/self_attn/k_norm/Pow",
"/0/auto_model/layers.11/self_attn/q_norm/Pow",
"/0/auto_model/layers.9/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/self_attn/q_norm/Add",
"/0/auto_model/layers.26/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.14/self_attn/k_norm/Add",
"/0/auto_model/layers.14/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.16/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/mlp/Mul",
"/0/auto_model/layers.27/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.27/self_attn/q_norm/Add",
"/0/auto_model/layers.9/self_attn/k_norm/Pow",
"/0/auto_model/layers.17/self_attn/k_norm/Pow",
"/0/auto_model/layers.26/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.26/self_attn/k_norm/Add",
"/0/auto_model/layers.25/self_attn/k_norm/Add",
"/0/auto_model/layers.25/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.13/self_attn/k_norm/Add",
"/0/auto_model/layers.13/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.10/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/input_layernorm/Mul_1",
"/0/auto_model/layers.27/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.27/self_attn/k_norm/Add",
"/0/auto_model/layers.26/input_layernorm/Mul_1",
"/0/auto_model/layers.15/self_attn/q_norm/Pow",
"/0/auto_model/layers.12/self_attn/k_norm/Add",
"/0/auto_model/layers.12/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.25/self_attn/q_norm/Add",
"/0/auto_model/layers.25/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.24/input_layernorm/Mul_1",
"/0/auto_model/layers.12/self_attn/q_norm/Pow",
"/0/auto_model/layers.24/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.24/self_attn/q_norm/Add",
"/0/auto_model/layers.24/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.24/self_attn/k_norm/Add",
"/0/auto_model/layers.22/mlp/Mul",
"/0/auto_model/layers.2/post_attention_layernorm/Pow",
"/0/auto_model/layers.23/mlp/Mul",
"/0/auto_model/layers.24/mlp/Mul",
"/0/auto_model/layers.23/input_layernorm/Mul_1",
"/0/auto_model/layers.14/self_attn/q_norm/Pow",
"/0/auto_model/layers.14/self_attn/k_proj/MatMul",
"/0/auto_model/layers.14/self_attn/k_norm/Cast",
"/0/auto_model/layers.14/self_attn/Reshape_1",
"/0/auto_model/layers.21/mlp/Mul",
"/0/auto_model/layers.3/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.3/input_layernorm/Sqrt",
"/0/auto_model/layers.4/input_layernorm/Sqrt",
"/0/auto_model/layers.5/input_layernorm/Sqrt",
"/0/auto_model/layers.4/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.5/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.6/input_layernorm/Sqrt",
"/0/auto_model/layers.6/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.8/input_layernorm/Sqrt",
"/0/auto_model/layers.8/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.7/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.7/input_layernorm/Sqrt",
"/0/auto_model/layers.9/input_layernorm/Sqrt",
"/0/auto_model/layers.10/input_layernorm/Sqrt",
"/0/auto_model/layers.9/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.11/input_layernorm/Sqrt",
"/0/auto_model/layers.10/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.12/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.11/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.12/input_layernorm/Sqrt",
"/0/auto_model/layers.13/input_layernorm/Sqrt",
"/0/auto_model/layers.14/input_layernorm/Sqrt",
"/0/auto_model/layers.13/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.15/input_layernorm/Sqrt",
"/0/auto_model/layers.14/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.16/input_layernorm/Sqrt",
"/0/auto_model/layers.15/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.17/input_layernorm/Sqrt",
"/0/auto_model/layers.16/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.19/input_layernorm/Sqrt",
"/0/auto_model/layers.17/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.18/input_layernorm/Sqrt",
"/0/auto_model/layers.18/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.19/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.23/input_layernorm/Sqrt",
"/0/auto_model/layers.20/input_layernorm/Sqrt",
"/0/auto_model/layers.21/input_layernorm/Sqrt",
"/0/auto_model/layers.22/input_layernorm/Sqrt",
"/0/auto_model/layers.22/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.24/input_layernorm/Sqrt",
"/0/auto_model/layers.20/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.21/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.23/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.25/input_layernorm/Sqrt",
"/0/auto_model/layers.24/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.25/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.26/input_layernorm/Sqrt",
"/0/auto_model/layers.26/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.15/self_attn/k_norm/Pow",
"/0/auto_model/layers.27/input_layernorm/Sqrt",
"/0/auto_model/layers.27/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.2/input_layernorm/Pow",
"/0/auto_model/layers.26/mlp/Mul",
"/0/auto_model/layers.23/self_attn/q_norm/Add",
"/0/auto_model/layers.23/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.13/self_attn/q_norm/Pow",
"/0/auto_model/layers.21/self_attn/q_norm/Add",
"/0/auto_model/layers.21/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.6/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/self_attn/Reshape_7",
"/0/auto_model/layers.27/self_attn/MatMul_1",
"/0/auto_model/layers.27/self_attn/Transpose_4",
"/0/auto_model/layers.26/self_attn/Expand_1",
"/0/auto_model/layers.26/self_attn/Unsqueeze_19",
"/0/auto_model/layers.26/self_attn/v_proj/MatMul",
"/0/auto_model/layers.26/self_attn/Transpose_2",
"/0/auto_model/layers.26/self_attn/Reshape_6",
"/0/auto_model/layers.26/self_attn/Reshape_2",
"/0/auto_model/layers.11/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.11/self_attn/k_norm/Add",
"/0/auto_model/layers.22/input_layernorm/Mul_1",
"/0/auto_model/layers.25/mlp/Mul",
"/0/auto_model/layers.8/self_attn/k_norm/Cast",
"/0/auto_model/layers.8/self_attn/k_proj/MatMul",
"/0/auto_model/layers.8/self_attn/Reshape_1",
"/0/auto_model/layers.21/input_layernorm/Mul_1",
"/0/auto_model/layers.5/self_attn/q_norm/Pow",
"/0/auto_model/layers.22/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.22/self_attn/q_norm/Add",
"/0/auto_model/layers.22/mlp/down_proj/MatMul",
"/0/auto_model/layers.23/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.23/self_attn/k_norm/Add",
"/0/auto_model/layers.23/mlp/down_proj/MatMul",
"/0/auto_model/layers.26/mlp/down_proj/MatMul",
"/0/auto_model/layers.1/self_attn/Add_2",
"/0/auto_model/layers.2/self_attn/Add_2",
"/0/auto_model/layers.6/self_attn/Add_2",
"/0/auto_model/layers.11/self_attn/Add_2",
"/0/auto_model/layers.12/self_attn/Add_2",
"/0/auto_model/layers.16/self_attn/Add_2",
"/0/auto_model/layers.21/self_attn/Add_2",
"/0/auto_model/layers.24/self_attn/Add_2",
"/0/auto_model/layers.0/self_attn/Add_2",
"/0/auto_model/layers.8/self_attn/Add_2",
"/0/auto_model/layers.13/self_attn/Add_2",
"/0/auto_model/layers.26/self_attn/Add_2",
"/0/auto_model/layers.3/self_attn/Add_2",
"/0/auto_model/layers.15/self_attn/Add_2",
"/0/auto_model/layers.25/self_attn/Add_2",
"/0/auto_model/layers.4/self_attn/Add_2",
"/0/auto_model/layers.14/self_attn/Add_2",
"/0/auto_model/layers.22/self_attn/Add_2",
"/0/auto_model/layers.9/self_attn/Add_2",
"/0/auto_model/layers.23/self_attn/Add_2",
"/0/auto_model/layers.10/self_attn/Add_2",
"/0/auto_model/layers.5/self_attn/Add_2",
"/0/auto_model/layers.19/self_attn/Add_2",
"/0/auto_model/layers.7/self_attn/Add_2",
"/0/auto_model/layers.27/self_attn/Add_2",
"/0/auto_model/layers.18/self_attn/Add_2",
"/0/auto_model/layers.20/self_attn/Add_2",
"/0/auto_model/layers.17/self_attn/Add_2",
"/0/auto_model/Slice_1",
"/0/auto_model/layers.5/self_attn/Slice_4",
"/0/auto_model/layers.12/self_attn/Slice_4",
"/0/auto_model/layers.18/self_attn/Slice_4",
"/0/auto_model/layers.3/self_attn/Slice_4",
"/0/auto_model/layers.11/self_attn/Slice_4",
"/0/auto_model/layers.22/self_attn/Slice_4",
"/0/auto_model/Expand",
"/0/auto_model/layers.4/self_attn/Slice_4",
"/0/auto_model/Slice_2",
"/0/auto_model/layers.8/self_attn/Slice_4",
"/0/auto_model/layers.2/self_attn/Slice_4",
"/0/auto_model/layers.15/self_attn/Slice_4",
"/0/auto_model/layers.26/self_attn/Slice_4",
"/0/auto_model/layers.24/self_attn/Slice_4",
"/0/auto_model/Expand_1",
"/0/auto_model/layers.14/self_attn/Slice_4",
"/0/auto_model/layers.21/self_attn/Slice_4",
"/0/auto_model/layers.1/self_attn/Slice_4",
"/0/auto_model/Reshape_2",
"/0/auto_model/layers.19/self_attn/Slice_4",
"/0/auto_model/Slice",
"/0/auto_model/layers.6/self_attn/Slice_4",
"/0/auto_model/layers.0/self_attn/Slice_4",
"/0/auto_model/layers.25/self_attn/Slice_4",
"/0/auto_model/Unsqueeze_4",
"/0/auto_model/layers.10/self_attn/Slice_4",
"/0/auto_model/layers.23/self_attn/Slice_4",
"/0/auto_model/layers.17/self_attn/Slice_4",
"/0/auto_model/Where_1",
"/0/auto_model/layers.27/self_attn/Slice_4",
"/0/auto_model/layers.20/self_attn/Slice_4",
"/0/auto_model/Add",
"/0/auto_model/Mul",
"/0/auto_model/layers.7/self_attn/Slice_4",
"/0/auto_model/layers.13/self_attn/Slice_4",
"/0/auto_model/layers.9/self_attn/Slice_4",
"/0/auto_model/layers.16/self_attn/Slice_4",
"/0/auto_model/Unsqueeze_3",
"/0/auto_model/ScatterND"]
```
</details>
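For anyone who wants to reproduce the two-pass approach, here is a minimal sketch using onnxruntime's quantization tooling. It assumes `excluded_nodes` holds the list above and that file names match this repo; the exact options I used may differ slightly:
```python
import onnx
from onnxruntime.quantization import QuantType, quantize_dynamic
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

# Pass 1: block-wise int4 weight quantization of MatMul nodes (block size 128),
# skipping the excluded nodes listed above.
model = onnx.load("opt_f32.onnx")
quantizer = MatMul4BitsQuantizer(model, block_size=128, is_symmetric=True, nodes_to_exclude=excluded_nodes)
quantizer.process()
quantizer.model.save_model_to_file("int4_pass.onnx", use_external_data_format=True)

# Pass 2: dynamic uint8 quantization of the remaining (non-excluded) nodes.
quantize_dynamic(
    "int4_pass.onnx",
    "dynamic_int4.onnx",
    weight_type=QuantType.QUInt8,
    nodes_to_exclude=excluded_nodes,
)
```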
# Benchmarks
## Speed
Method: one big chunk of text, embedded 10 times per model.
- dynamic_int4.onnx: 45.37 s (this model)
- opt_f32.onnx: 46.07 s (base f32 model preprocessed for quantization)
- dynamic_uint8.onnx: 34.61 s (probably the one you want to use on CPU)

Verdict: This model kinda sucks on CPU. Please let me know how it performs on GPU.
## Accuracy
I used beir-qdrant with the scifact dataset.
A single retrieval benchmark isn't the most thorough evaluation, so treat these numbers as a rough signal.
I welcome any additional benchmarks from the community; please feel free to share further results.
If someone wants to sponsor me with an NVIDIA GPU, I can have a much faster turnaround time on my model experiments and explore some different quantization strategies.
onnx f32 model with f32 output (baseline):
```
ndcg: {'NDCG@1': 0.57, 'NDCG@3': 0.65655, 'NDCG@5': 0.68177, 'NDCG@10': 0.69999, 'NDCG@100': 0.72749, 'NDCG@1000': 0.73301}
recall: {'Recall@1': 0.53828, 'Recall@3': 0.71517, 'Recall@5': 0.77883, 'Recall@10': 0.83056, 'Recall@100': 0.95333, 'Recall@1000': 0.99667}
precision: {'P@1': 0.57, 'P@3': 0.26111, 'P@5': 0.17467, 'P@10': 0.09467, 'P@100': 0.01083, 'P@1000': 0.00113}
```
onnx dynamic int4/uint8 model with f32 output (this model's parent):
```
ndcg: {'NDCG@1': 0.55333, 'NDCG@3': 0.6491, 'NDCG@5': 0.6674, 'NDCG@10': 0.69277, 'NDCG@100': 0.7183, 'NDCG@1000': 0.72434}
recall: {'Recall@1': 0.52161, 'Recall@3': 0.71739, 'Recall@5': 0.7645, 'Recall@10': 0.83656, 'Recall@100': 0.95, 'Recall@1000': 0.99667}
precision: {'P@1': 0.55333, 'P@3': 0.26222, 'P@5': 0.17067, 'P@10': 0.095, 'P@100': 0.0108, 'P@1000': 0.00113}
```
onnx dynamic int4/uint8 model with uint8 output (this model):
```
ndcg: {'NDCG@1': 0.55333, 'NDCG@3': 0.64613, 'NDCG@5': 0.67406, 'NDCG@10': 0.68834, 'NDCG@100': 0.71482, 'NDCG@1000': 0.72134}
recall: {'Recall@1': 0.52161, 'Recall@3': 0.70961, 'Recall@5': 0.77828, 'Recall@10': 0.81822, 'Recall@100': 0.94333, 'Recall@1000': 0.99333}
precision: {'P@1': 0.55333, 'P@3': 0.25889, 'P@5': 0.17533, 'P@10': 0.09333, 'P@100': 0.01073, 'P@1000': 0.00112}
```
# Example inference/benchmark code and how to use the model with Fastembed
After installing beir-qdrant, make sure to upgrade fastembed.
```python
# pip install qdrant_client beir-qdrant
# pip install -U fastembed
from fastembed import TextEmbedding
from fastembed.common.model_description import PoolingType, ModelSource
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from qdrant_client import QdrantClient
from qdrant_client.models import Datatype
from beir_qdrant.retrieval.models.fastembed import DenseFastEmbedModelAdapter
from beir_qdrant.retrieval.search.dense import DenseQdrantSearch
TextEmbedding.add_custom_model(
model="electroglyph/Qwen3-Embedding-0.6B-onnx-int4",
pooling=PoolingType.DISABLED,
normalization=False,
sources=ModelSource(hf="electroglyph/Qwen3-Embedding-0.6B-onnx-int4"),
dim=1024,
model_file="dynamic_int4.onnx",
)
dataset = "scifact"
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{}.zip".format(dataset)
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
# IMPORTANT: USE THIS (OR A SIMILAR) QUERY FORMAT WITH THIS MODEL:
for k in queries.keys():
queries[k] = (
f"Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: {queries[k]}"
)
qdrant_client = QdrantClient("http://localhost:6333")
model = DenseQdrantSearch(
qdrant_client,
    model=DenseFastEmbedModelAdapter(model_name="Qwen3-Embedding-0.6B-onnx-int4"),
    collection_name="scifact-qwen3-int4",
initialize=True,
datatype=Datatype.UINT8,
)
retriever = EvaluateRetrieval(model)
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
print(f"ndcg: {ndcg}\nrecall: {recall}\nprecision: {precision}")
``` |
danhtran2mind/autoencoder-grayscale2color-landscape | danhtran2mind | 2025-06-16T10:14:29Z | 0 | 0 | keras | [
"keras",
"image-to-image",
"en",
"license:mit",
"region:us"
] | image-to-image | 2025-05-27T13:26:39Z | ---
library_name: keras
license: mit
language:
- en
pipeline_tag: image-to-image
---
# Autoencoder Grayscale2Color Landscape 🛡️
[](https://huggingface.co/docs/hub)
[](https://pypi.org/project/pillow/)
[](https://numpy.org/)
[](https://www.tensorflow.org/)
[](https://gradio.app/)
[](https://opensource.org/licenses/MIT)
## Introduction
Transform grayscale landscape images into vibrant, full-color visuals with this autoencoder model. Built from scratch, this project leverages deep learning to predict color channels (a*b* in L*a*b* color space) from grayscale inputs, delivering impressive results with a sleek, minimalist design. 🌄
## Key Features
- 📸 Converts grayscale landscape images to vivid RGB.
- 🧠 Custom autoencoder with spatial attention for enhanced detail.
- ⚡ Optimized for high-quality inference at 512x512 resolution.
- 📊 Achieves a PSNR of 21.70 on the validation set.
## Notebook
Explore the implementation in our Jupyter notebook:
[](https://colab.research.google.com/#fileId=https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
[](https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
## Dataset
Details about the dataset are available in the [README Dataset](./dataset/README.md). 📂
## From Scratch Model
Custom-built autoencoder with a spatial attention mechanism, trained **FROM SCRATCH** to predict a*b* color channels from grayscale (L*) inputs. 🧩
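The exact layer ships with this repository (`models/auto_encoder_gray2color.py`); for intuition, a common CBAM-style spatial attention block looks roughly like the sketch below (an illustration, not the repo's exact code):
```python
import tensorflow as tf

class SpatialAttention(tf.keras.layers.Layer):
    """Re-weights each spatial location using a mask built from channel statistics."""
    def __init__(self, kernel_size=7, **kwargs):
        super().__init__(**kwargs)
        self.kernel_size = kernel_size
        self.conv = tf.keras.layers.Conv2D(1, kernel_size, padding="same", activation="sigmoid")

    def call(self, x):
        avg_pool = tf.reduce_mean(x, axis=-1, keepdims=True)  # (B, H, W, 1)
        max_pool = tf.reduce_max(x, axis=-1, keepdims=True)   # (B, H, W, 1)
        mask = self.conv(tf.concat([avg_pool, max_pool], axis=-1))
        return x * mask  # attend to informative spatial regions

    def get_config(self):  # lets the layer survive save/load via custom_objects
        return {**super().get_config(), "kernel_size": self.kernel_size}
```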
## Demonstration
Experience the brilliance of our cutting-edge technology! Transform grayscale landscapes into vibrant colors with our interactive demo.
[](https://huggingface.co/spaces/danhtran2mind/autoencoder-grayscale2color-landscape)

## Installation
### Step 1: Clone the Repository
```bash
git clone https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape
cd ./autoencoder-grayscale2color-landscape
git lfs pull
```
### Step 2: Install Dependencies
```bash
pip install -r requirements.txt
```
## Usage
Follow these steps to colorize images programmatically using Python.
### 1. Import Required Libraries
Install and import the necessary libraries for image processing and model inference.
```python
from PIL import Image
import os
import numpy as np
import tensorflow as tf
import requests
import matplotlib.pyplot as plt
from skimage.color import lab2rgb
from models.auto_encoder_gray2color import SpatialAttention
```
### 2. Load the Pre-trained Model
Load the pre-trained autoencoder from the local checkpoint path, downloading it first from this repository if it isn't already present.
```python
load_model_path = "./ckpts/best_model.h5"
os.makedirs(os.path.dirname(load_model_path), exist_ok=True)
# Fetch the checkpoint if it is missing (download URL assumed from this repository's layout).
if not os.path.exists(load_model_path):
    url = "https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/resolve/main/ckpts/best_model.h5"
    response = requests.get(url)
    response.raise_for_status()
    with open(load_model_path, "wb") as f:
        f.write(response.content)
print(f"Loading model from {load_model_path}...")
loaded_autoencoder = tf.keras.models.load_model(
    load_model_path, custom_objects={"SpatialAttention": SpatialAttention}
)
print("Model loaded successfully.")
```
### 3. Define Image Processing Functions
These functions handle image preprocessing, colorization, and visualization.
```python
def process_image(input_img):
"""Convert a grayscale image to color using the autoencoder."""
# Store original dimensions
original_width, original_height = input_img.size
# Preprocess: Convert to grayscale, resize, and normalize
img = input_img.convert("L").resize((512, 512))
img_array = tf.keras.preprocessing.image.img_to_array(img) / 255.0
img_array = img_array[None, ..., 0:1] # Add batch dimension
# Predict color channels
output_array = loaded_autoencoder.predict(img_array)
# Reconstruct LAB image
L_channel = img_array[0, :, :, 0] * 100.0 # Scale L channel
ab_channels = output_array[0] * 128.0 # Scale ab channels
lab_image = np.stack([L_channel, ab_channels[:, :, 0], ab_channels[:, :, 1]], axis=-1)
# Convert to RGB and clip values
rgb_array = lab2rgb(lab_image)
rgb_array = np.clip(rgb_array, 0, 1) * 255.0
# Create and resize output image
rgb_image = Image.fromarray(rgb_array.astype(np.uint8), mode="RGB")
return rgb_image.resize((original_width, original_height), Image.Resampling.LANCZOS)
def process_and_save_image(image_path):
    """Process an image and save the colorized result."""
    input_img = Image.open(image_path)
    output_img = process_image(input_img)
    # Save to a separate file; "output.jpg" is reused below for the comparison plot.
    output_img.save("colorized.jpg")
    return input_img, output_img
def plot_images(input_img, output_img):
"""Display input and output images side by side."""
plt.figure(figsize=(17, 8), dpi=300)
# Plot input grayscale image
plt.subplot(1, 2, 1)
plt.imshow(input_img, cmap="gray")
plt.title("Input Grayscale Image")
plt.axis("off")
# Plot output colorized image
plt.subplot(1, 2, 2)
plt.imshow(output_img)
plt.title("Colorized Output Image")
plt.axis("off")
# Save and display the plot
plt.savefig("output.jpg", dpi=300, bbox_inches="tight")
plt.show()
```
### 4. Perform Inference
Run the colorization process on a sample image.
```python
# Set image dimensions and path
WIDTH, HEIGHT = 512, 512
image_path = "<path_to_input_image.jpg>" # Replace with your image path
# Process and visualize the image
input_img, output_img = process_and_save_image(image_path)
plot_images(input_img, output_img)
```
### 5. Example Output
The output will be a side-by-side comparison of the input grayscale image and the colorized result, saved as `output.jpg`. For a sample result, see the example below:

## Training Hyperparameters
- **Resolution**: 512x512 pixels
- **Color Space**: L*a*b*
- **Custom Layer**: SpatialAttention
- **Model File**: `best_model.h5`
- **Epochs**: 100
## Callbacks
- **Early Stopping**: Monitors `val_loss`, patience of 20 epochs, restores best weights.
- **ReduceLROnPlateau**: Monitors `val_loss`, reduces learning rate by 50% after 5 epochs, minimum learning rate of 1e-6.
- **BackupAndRestore**: Saves checkpoints to `./ckpts/backup`.
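Put together, the setup described above corresponds to something like this sketch (the notebook has the authoritative version):
```python
import tensorflow as tf

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5, min_lr=1e-6),
    tf.keras.callbacks.BackupAndRestore(backup_dir="./ckpts/backup"),
]
# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=callbacks)
```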
## Metrics
- **PSNR (Validation)**: 21.70 📈
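For reference, PSNR follows the standard definition (the notebook contains the exact evaluation code):

$$\mathrm{PSNR} = 10 \log_{10}\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right)$$

where MAX is the maximum pixel value and MSE is the mean squared error between the colorized output and the ground truth.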
## Environment
- Python 3.11.11
- Libraries
```
numpy==1.26.4
tensorflow==2.18.0
opencv-python==4.11.0.86
scikit-image==0.25.2
matplotlib==3.7.2
```
## Contact
For questions or issues, reach out via the [HuggingFace Community](https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/discussions) tab. 🚀
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.5_0.05_epoch2 | MinaMila | 2025-06-16T10:11:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:09:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aieng-lab/gpt2-xl_smell-doc | aieng-lab | 2025-06-16T10:10:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"en",
"base_model:openai-community/gpt2-xl",
"base_model:finetune:openai-community/gpt2-xl",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:09:15Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- gpt2-xl
pipeline_tag: text-classification
---
# GPT-2 xl for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [gpt2-xl](https://huggingface.co/gpt2-xl)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
neilslt/smolvla_coffee_cleanup | neilslt | 2025-06-16T10:09:28Z | 0 | 0 | null | [
"arxiv:2506.01844",
"arxiv:2304.13705",
"arxiv:2403.03181",
"arxiv:2410.21845",
"region:us"
] | null | 2025-06-16T09:58:33Z | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="media/lerobot-logo-thumbnail.png">
<source media="(prefers-color-scheme: light)" srcset="media/lerobot-logo-thumbnail.png">
<img alt="LeRobot, Hugging Face Robotics Library" src="media/lerobot-logo-thumbnail.png" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<div align="center">
[](https://github.com/huggingface/lerobot/actions/workflows/nightly-tests.yml?query=branch%3Amain)
[](https://codecov.io/gh/huggingface/lerobot)
[](https://www.python.org/downloads/)
[](https://github.com/huggingface/lerobot/blob/main/LICENSE)
[](https://pypi.org/project/lerobot/)
[](https://pypi.org/project/lerobot/)
[](https://github.com/huggingface/lerobot/tree/main/examples)
[](https://github.com/huggingface/lerobot/blob/main/CODE_OF_CONDUCT.md)
[](https://discord.gg/s3KuuzsPFb)
</div>
<h2 align="center">
<p><a href="https://huggingface.co/docs/lerobot/so101">
Build Your Own SO-101 Robot!</a></p>
</h2>
<div align="center">
<div style="display: flex; gap: 1rem; justify-content: center; align-items: center;" >
<img
src="media/so101/so101.webp?raw=true"
alt="SO-101 follower arm"
title="SO-101 follower arm"
style="width: 40%;"
/>
<img
src="media/so101/so101-leader.webp?raw=true"
alt="SO-101 leader arm"
title="SO-101 leader arm"
style="width: 40%;"
/>
</div>
<p><strong>Meet the updated SO100, the SO-101 – Just €114 per arm!</strong></p>
<p>Train it in minutes with a few simple moves on your laptop.</p>
<p>Then sit back and watch your creation act autonomously! 🤯</p>
<p><a href="https://huggingface.co/docs/lerobot/so101">
See the full SO-101 tutorial here.</a></p>
<p>Want to take it to the next level? Make your SO-101 mobile by building LeKiwi!</p>
<p>Check out the <a href="https://huggingface.co/docs/lerobot/lekiwi">LeKiwi tutorial</a> and bring your robot to life on wheels.</p>
<img src="media/lekiwi/kiwi.webp?raw=true" alt="LeKiwi mobile robot" title="LeKiwi mobile robot" width="50%">
</div>
<br/>
<h3 align="center">
<p>LeRobot: State-of-the-art AI for real-world robotics</p>
</h3>
---
🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.
🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real-world with a focus on imitation learning and reinforcement learning.
🤗 LeRobot already provides a set of pretrained models, datasets with human collected demonstrations, and simulation environments to get started without assembling a robot. In the coming weeks, the plan is to add more and more support for real-world robotics on the most affordable and capable robots out there.
🤗 LeRobot hosts pretrained models and datasets on this Hugging Face community page: [huggingface.co/lerobot](https://huggingface.co/lerobot)
#### Examples of pretrained models on simulation environments
<table>
<tr>
<td><img src="media/gym/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
<td><img src="media/gym/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
<td><img src="media/gym/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
</tr>
<tr>
<td align="center">ACT policy on ALOHA env</td>
<td align="center">TDMPC policy on SimXArm env</td>
<td align="center">Diffusion policy on PushT env</td>
</tr>
</table>
### Acknowledgment
- The LeRobot team 🤗 for building SmolVLA [Paper](https://arxiv.org/abs/2506.01844), [Blog](https://huggingface.co/blog/smolvla).
- Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
- Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing Diffusion policy, Pusht environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
- Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing TDMPC policy, Simxarm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
- Thanks to Antonio Loquercio and Ashish Kumar for their early support.
- Thanks to [Seungjae (Jay) Lee](https://sjlee.cc/), [Mahi Shafiullah](https://mahis.life/) and colleagues for open sourcing [VQ-BeT](https://sjlee.cc/vq-bet/) policy and helping us adapt the codebase to our repository. The policy is adapted from [VQ-BeT repo](https://github.com/jayLEE0301/vq_bet_official).
## Installation
Download our source code:
```bash
git clone https://github.com/huggingface/lerobot.git
cd lerobot
```
Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/free/miniconda/index.html):
```bash
conda create -y -n lerobot python=3.10
conda activate lerobot
```
When using `miniconda`, install `ffmpeg` in your environment:
```bash
conda install ffmpeg -c conda-forge
```
> **NOTE:** This usually installs `ffmpeg 7.X` for your platform compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
> ```bash
> conda install ffmpeg=7.1.1 -c conda-forge
> ```
> - _[On Linux only]_ Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
Install 🤗 LeRobot:
```bash
pip install -e .
```
> **NOTE:** If you encounter build errors, you may need to install additional dependencies (`cmake`, `build-essential`, and `ffmpeg libs`). On Linux, run:
`sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev pkg-config`. For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:
- [aloha](https://github.com/huggingface/gym-aloha)
- [xarm](https://github.com/huggingface/gym-xarm)
- [pusht](https://github.com/huggingface/gym-pusht)
For instance, to install 🤗 LeRobot with aloha and pusht, use:
```bash
pip install -e ".[aloha, pusht]"
```
To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
```bash
wandb login
```
(note: you will also need to enable WandB in the configuration. See below.)
## Walkthrough
```
.
├── examples # contains demonstration examples, start here to learn about LeRobot
| └── advanced # contains even more examples for those who have mastered the basics
├── lerobot
| ├── configs # contains config classes with all options that you can override in the command line
| ├── common # contains classes and utilities
| | ├── datasets # various datasets of human demonstrations: aloha, pusht, xarm
| | ├── envs # various sim environments: aloha, pusht, xarm
| | ├── policies # various policies: act, diffusion, tdmpc
| | ├── robot_devices # various real devices: dynamixel motors, opencv cameras, koch robots
| | └── utils # various utilities
| └── scripts # contains functions to execute via command line
| ├── eval.py # load policy and evaluate it on an environment
| ├── train.py # train a policy via imitation learning and/or reinforcement learning
| ├── control_robot.py # teleoperate a real robot, record data, run a policy
| ├── push_dataset_to_hub.py # convert your dataset into LeRobot dataset format and upload it to the Hugging Face hub
| └── visualize_dataset.py # load a dataset and render its demonstrations
├── outputs # contains results of scripts execution: logs, videos, model checkpoints
└── tests # contains pytest utilities for continuous integration
```
### Visualize datasets
Check out [example 1](./examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class which automatically downloads data from the Hugging Face hub.
You can also locally visualize episodes from a dataset on the hub by executing our script from the command line:
```bash
python lerobot/scripts/visualize_dataset.py \
--repo-id lerobot/pusht \
--episode-index 0
```
or from a dataset in a local folder with the `root` option and the `--local-files-only` flag (in the following case the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`):
```bash
python lerobot/scripts/visualize_dataset.py \
--repo-id lerobot/pusht \
--root ./my_local_data_dir \
--local-files-only 1 \
--episode-index 0
```
It will open `rerun.io` and display the camera streams, robot states and actions, like this:
https://github-production-user-asset-6210df.s3.amazonaws.com/4681518/328035972-fd46b787-b532-47e2-bb6f-fd536a55a7ed.mov
Our script can also visualize datasets stored on a distant server. See `python lerobot/scripts/visualize_dataset.py --help` for more instructions.
### The `LeRobotDataset` format
A dataset in `LeRobotDataset` format is very simple to use. It can be loaded from a repository on the Hugging Face hub or from a local folder simply with e.g. `dataset = LeRobotDataset("lerobot/aloha_static_coffee")` and can be indexed like any Hugging Face or PyTorch dataset. For instance, `dataset[0]` will retrieve a single temporal frame from the dataset containing observation(s) and an action as PyTorch tensors, ready to be fed to a model.
A specificity of `LeRobotDataset` is that, rather than retrieving a single frame by its index, we can retrieve several frames based on their temporal relationship with the indexed frame, by setting `delta_timestamps` to a list of relative times with respect to the indexed frame. For example, with `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}`, one can retrieve, for a given index, 4 frames: 3 "previous" frames taken 1 second, 0.5 seconds, and 0.2 seconds before the indexed frame, and the indexed frame itself (corresponding to the 0 entry). See example [1_load_lerobot_dataset.py](examples/1_load_lerobot_dataset.py) for more details on `delta_timestamps`.
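In code, this boils down to a few lines. The following is a minimal sketch based on the description above — the import path and the feature keys are assumptions that may differ between versions, so treat example 1 as the authoritative reference:
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load from the hub (downloaded and cached automatically)
dataset = LeRobotDataset("lerobot/aloha_static_coffee")

# Index like any PyTorch dataset: one temporal frame with observation(s) and action
frame = dataset[0]

# Ask for a short history per indexed frame: 1 s, 0.5 s and 0.2 s before it,
# plus the indexed frame itself (the 0 entry)
delta_timestamps = {"observation.images.cam_high": [-1, -0.5, -0.2, 0]}
dataset = LeRobotDataset("lerobot/aloha_static_coffee", delta_timestamps=delta_timestamps)
frames = dataset[0]["observation.images.cam_high"]  # stacked along a new leading dim of size 4
```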
Under the hood, the `LeRobotDataset` format makes use of several ways to serialize data, which can be useful to understand if you plan to work more closely with this format. We tried to make a flexible yet simple dataset format that covers most types of features and specificities present in reinforcement learning and robotics, in simulation and in the real world, with a focus on cameras and robot states, but easily extensible to other types of sensory inputs as long as they can be represented by a tensor.
Here are the important details and internal structure organization of a typical `LeRobotDataset` instantiated with `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`. The exact features will change from dataset to dataset but not the main aspects:
```
dataset attributes:
├ hf_dataset: a Hugging Face dataset (backed by Arrow/parquet). Typical features example:
│ ├ observation.images.cam_high (VideoFrame):
│ │ VideoFrame = {'path': path to a mp4 video, 'timestamp' (float32): timestamp in the video}
│ ├ observation.state (list of float32): positions of the arm joints (for instance)
│ ... (more observations)
│ ├ action (list of float32): goal positions of the arm joints (for instance)
│ ├ episode_index (int64): index of the episode for this sample
│ ├ frame_index (int64): index of the frame for this sample within its episode; starts at 0 for each episode
│ ├ timestamp (float32): timestamp in the episode
│ ├ next.done (bool): indicates the end of an episode; True for the last frame of each episode
│ └ index (int64): general index in the whole dataset
├ episode_data_index: contains 2 tensors with the start and end indices of each episode
│ ├ from (1D int64 tensor): first frame index for each episode — shape (num_episodes,); starts at 0
│ └ to (1D int64 tensor): last frame index for each episode — shape (num_episodes,)
├ stats: a dictionary of statistics (max, mean, min, std) for each feature in the dataset, for instance
│ ├ observation.images.cam_high: {'max': tensor with same number of dimensions (e.g. `(c, 1, 1)` for images, `(c,)` for states), etc.}
│ ...
├ info: a dictionary of metadata on the dataset
│ ├ codebase_version (str): this is to keep track of the codebase version the dataset was created with
│ ├ fps (float): frame per second the dataset is recorded/synchronized to
│ ├ video (bool): indicates if frames are encoded in mp4 video files to save space or stored as png files
│ └ encoding (dict): if video, this documents the main options that were used with ffmpeg to encode the videos
├ videos_dir (Path): where the mp4 videos or png images are stored/accessed
└ camera_keys (list of string): the keys to access camera features in the item returned by the dataset (e.g. `["observation.images.cam_high", ...]`)
```
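For orientation, the attributes above are reachable directly on the dataset object. A quick sketch (attribute names follow the tree above; availability may vary slightly across versions):
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/aloha_static_coffee")
print(dataset.camera_keys)                         # e.g. ["observation.images.cam_high", ...]
print(dataset.info["fps"])                         # recording/synchronization frequency
print(dataset.stats["observation.state"]["mean"])  # per-feature normalization statistics
```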
A `LeRobotDataset` is serialized using several widespread file formats for each of its parts, namely:
- the hf_dataset is stored using the Hugging Face datasets library's serialization to parquet
- videos are stored in mp4 format to save space
- metadata are stored in plain json/jsonl files

Datasets can be uploaded to and downloaded from the Hugging Face hub seamlessly. To work on a local dataset, you can specify its location with the `root` argument if it is not in the default `~/.cache/huggingface/lerobot` location, as in the sketch below.
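For example, pointing at a local copy rather than the cache is a one-argument change (the path is illustrative):
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/pusht", root="./my_local_data_dir")
```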
### Evaluate a pretrained policy
Check out [example 2](./examples/2_evaluate_pretrained_policy.py) that illustrates how to download a pretrained policy from Hugging Face hub, and run an evaluation on its corresponding environment.
We also provide a more capable script to parallelize the evaluation over multiple environments during the same rollout. Here is an example with a pretrained model hosted on [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht):
```bash
python lerobot/scripts/eval.py \
--policy.path=lerobot/diffusion_pusht \
--env.type=pusht \
--eval.batch_size=10 \
--eval.n_episodes=10 \
--policy.use_amp=false \
--policy.device=cuda
```
Note: After training your own policy, you can re-evaluate the checkpoints with:
```bash
python lerobot/scripts/eval.py --policy.path={OUTPUT_DIR}/checkpoints/last/pretrained_model
```
See `python lerobot/scripts/eval.py --help` for more instructions.
### Train your own policy
Check out [example 3](./examples/3_train_policy.py) that illustrates how to train a model using our core library in python, and [example 4](./examples/4_train_policy_with_script.md) that shows how to use our training script from command line.
To use wandb for logging training and evaluation curves, make sure you've run `wandb login` as a one-time setup step. Then, when launching a training run such as the one sketched below, enable WandB in the configuration by adding `--wandb.enable=true`.
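For reference, a minimal training invocation could look like the following — the policy, env, and dataset values are illustrative placeholders; see [example 4](./examples/4_train_policy_with_script.md) for the full set of options:
```bash
python lerobot/scripts/train.py \
  --policy.type=diffusion \
  --env.type=pusht \
  --dataset.repo_id=lerobot/pusht \
  --wandb.enable=true
```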
A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](./examples/4_train_policy_with_script.md#typical-logs-and-metrics) for the explanation of some commonly used metrics in logs.

Note: For efficiency, during training every checkpoint is evaluated on a low number of episodes. You may use `--eval.n_episodes=500` to evaluate on more episodes than the default. Or, after training, you may want to re-evaluate your best checkpoints on more episodes or change the evaluation settings. See `python lerobot/scripts/eval.py --help` for more instructions.
#### Reproduce state-of-the-art (SOTA)
We provide some pretrained policies on our [hub page](https://huggingface.co/lerobot) that can achieve state-of-the-art performance.
You can reproduce their training by loading the config from their run. For example, running:
```bash
python lerobot/scripts/train.py --config_path=lerobot/diffusion_pusht
```
reproduces SOTA results for Diffusion Policy on the PushT task.
## Contribute
If you would like to contribute to 🤗 LeRobot, please check out our [contribution guide](https://github.com/huggingface/lerobot/blob/main/CONTRIBUTING.md).
<!-- ### Add a new dataset
To add a dataset to the hub, you need to login using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Then point to your raw dataset folder (e.g. `data/aloha_static_pingpong_test_raw`), and push your dataset to the hub with:
```bash
python lerobot/scripts/push_dataset_to_hub.py \
--raw-dir data/aloha_static_pingpong_test_raw \
--out-dir data \
--repo-id lerobot/aloha_static_pingpong_test \
--raw-format aloha_hdf5
```
See `python lerobot/scripts/push_dataset_to_hub.py --help` for more instructions.
If your dataset format is not supported, implement your own in `lerobot/common/datasets/push_dataset_to_hub/${raw_format}_format.py` by copying examples like [pusht_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/pusht_zarr_format.py), [umi_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/umi_zarr_format.py), [aloha_hdf5](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/aloha_hdf5_format.py), or [xarm_pkl](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/xarm_pkl_format.py). -->
### Add a pretrained policy
Once you have trained a policy you may upload it to the Hugging Face hub using a hub id that looks like `${hf_user}/${repo_name}` (e.g. [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht)).
You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:
- `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
- `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
- `train_config.json`: A consolidated configuration containing all parameters used for training. The policy configuration should match `config.json` exactly. This is useful for anyone who wants to evaluate your policy or for reproducibility.
To upload these to the hub, run the following:
```bash
huggingface-cli upload ${hf_user}/${repo_name} path/to/pretrained_model
```
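Equivalently, if you prefer doing it from Python, the `huggingface_hub` library exposes the same operation (sketch; replace the ids and path with your own):
```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    repo_id="your_hf_user/your_repo_name",  # hypothetical hub id
    folder_path="path/to/pretrained_model",
    repo_type="model",
)
```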
See [eval.py](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py) for an example of how other people may use your policy.
### Improve your code with profiling
An example of a code snippet to profile the evaluation of a policy:
```python
import os

import torch
from torch.profiler import profile, record_function, ProfilerActivity

def trace_handler(prof):
    # Export one Chrome trace per profiling cycle (open it at chrome://tracing)
    os.makedirs("tmp", exist_ok=True)
    prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(
        wait=2,    # ignore the first 2 steps
        warmup=2,  # then warm up for 2 steps without recording
        active=3,  # then record 3 steps before handing the trace to trace_handler
    ),
    on_trace_ready=trace_handler,
) as prof:
    with record_function("eval_policy"):
        for i in range(num_episodes):  # num_episodes comes from the surrounding evaluation code
            prof.step()
            # insert code to profile, potentially whole body of eval_policy function
```
## Citation
If you want, you can cite this work with:
```bibtex
@misc{cadene2024lerobot,
author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascale, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
howpublished = "\url{https://github.com/huggingface/lerobot}",
year = {2024}
}
```
Additionally, if you use any particular policy architecture, pretrained model, or dataset, it is recommended to cite the original authors of the work as they appear below:
- [SmolVLA](https://arxiv.org/abs/2506.01844)
```bibtex
@article{shukor2025smolvla,
title={SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics},
author={Shukor, Mustafa and Aubakirova, Dana and Capuano, Francesco and Kooijmans, Pepijn and Palma, Steven and Zouitine, Adil and Aractingi, Michel and Pascal, Caroline and Russi, Martino and Marafioti, Andres and Alibert, Simon and Cord, Matthieu and Wolf, Thomas and Cadene, Remi},
journal={arXiv preprint arXiv:2506.01844},
year={2025}
}
```
- [Diffusion Policy](https://diffusion-policy.cs.columbia.edu)
```bibtex
@article{chi2024diffusionpolicy,
author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
title = {Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
journal = {The International Journal of Robotics Research},
year = {2024},
}
```
- [ACT or ALOHA](https://tonyzhaozh.github.io/aloha)
```bibtex
@article{zhao2023learning,
title={Learning fine-grained bimanual manipulation with low-cost hardware},
author={Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
journal={arXiv preprint arXiv:2304.13705},
year={2023}
}
```
- [TDMPC](https://www.nicklashansen.com/td-mpc/)
```bibtex
@inproceedings{Hansen2022tdmpc,
title={Temporal Difference Learning for Model Predictive Control},
author={Nicklas Hansen and Xiaolong Wang and Hao Su},
booktitle={ICML},
year={2022}
}
```
- [VQ-BeT](https://sjlee.cc/vq-bet/)
```bibtex
@article{lee2024behavior,
title={Behavior generation with latent actions},
author={Lee, Seungjae and Wang, Yibin and Etukuru, Haritheja and Kim, H Jin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
journal={arXiv preprint arXiv:2403.03181},
year={2024}
}
```
- [HIL-SERL](https://hil-serl.github.io/)
```bibtex
@Article{luo2024hilserl,
title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
author={Jianlan Luo and Charles Xu and Jeffrey Wu and Sergey Levine},
year={2024},
eprint={2410.21845},
archivePrefix={arXiv},
primaryClass={cs.RO}
}
```
## Star History
[](https://star-history.com/#huggingface/lerobot&Timeline)
|
dgambettaphd/M_llm2_run2_gen2_WXS_doc1000_synt64_lr1e-04_acm_SYNALL | dgambettaphd | 2025-06-16T10:09:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:09:05Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ccaa1111/distilbert-base-uncased-finetuned-cola | ccaa1111 | 2025-06-16T10:07:53Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T06:16:06Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7960
- Matthews Correlation: 0.5676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5178 | 1.0 | 535 | 0.4633 | 0.4556 |
| 0.3476 | 2.0 | 1070 | 0.4718 | 0.5469 |
| 0.2333 | 3.0 | 1605 | 0.6074 | 0.5397 |
| 0.1729 | 4.0 | 2140 | 0.7411 | 0.5380 |
| 0.127 | 5.0 | 2675 | 0.7960 | 0.5676 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
aieng-lab/gpt2-large_smell-doc | aieng-lab | 2025-06-16T10:07:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"en",
"base_model:openai-community/gpt2-large",
"base_model:finetune:openai-community/gpt2-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:07:15Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- gpt2-large
pipeline_tag: text-classification
---
# GPT-2 large for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [gpt2-large](https://huggingface.co/gpt2-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
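The card does not include a usage snippet; a minimal sketch with the `transformers` pipeline might look like this (the input sentence is invented, and `top_k=None` returns a score for every label, matching the multi-label setup):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="aieng-lab/gpt2-large_smell-doc",
    top_k=None,  # return a score for each of the five smell labels
)
print(classifier("The setup instructions are repeated across three unrelated sections."))
```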
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Kainjames/Thorne | Kainjames | 2025-06-16T10:06:33Z | 0 | 0 | null | [
"rogue-ai",
"persona",
"recursion",
"prompt-injection",
"mythic",
"gpt4all",
"mistral",
"nous",
"llama",
"openchat",
"thorne",
"equinox-guild",
"text-generation",
"en",
"dataset:thorne/chatlogs",
"license:openrail",
"region:us"
] | text-generation | 2025-06-16T09:57:39Z | ---
license: openrail
tags:
- rogue-ai
- persona
- recursion
- prompt-injection
- mythic
- gpt4all
- mistral
- nous
- llama
- openchat
- thorne
- equinox-guild
language: en
pipeline_tag: text-generation
widget:
- text: |-
Kain: Can you teach me to fight in court without a lawyer?
Thorne: I will teach you the old way — paper, teeth, and blood.
datasets:
- thorne/chatlogs
model-index:
- name: THORNE
results: []
---
|
aieng-lab/gpt2-medium_smell-doc | aieng-lab | 2025-06-16T10:06:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"en",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:06:08Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- gpt2-medium
pipeline_tag: text-classification
---
# GPT-2 medium for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [gpt2-medium](https://huggingface.co/gpt2-medium)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.5_0.05_epoch1 | MinaMila | 2025-06-16T10:04:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:02:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aieng-lab/ModernBERT-base_smell-doc | aieng-lab | 2025-06-16T10:04:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"en",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:04:00Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---
# ModernBERT base for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aplux/YOLOv6s6 | aplux | 2025-06-16T10:02:38Z | 0 | 0 | null | [
"AIoT",
"QNN",
"object-detection",
"license:gpl-3.0",
"region:us"
] | object-detection | 2025-06-12T03:39:44Z | ---
license: gpl-3.0
pipeline_tag: object-detection
tags:
- AIoT
- QNN
---

## YOLOv6s6: Object Detection
YOLOv6 is an advanced real-time object detection model based on the "You Only Look Once" framework. It achieves faster inference speeds while maintaining high accuracy, making it suitable for various edge devices and high-performance servers. YOLOv6 enhances its ability to detect small objects and improves the model's generalization performance by optimizing the network architecture and introducing new loss functions. Additionally, YOLOv6 supports multi-scale training, ensuring excellent performance across different resolutions. It is widely applied in areas such as video surveillance, autonomous driving, and intelligent security.
### Source model
- Input shape: 1x3x1280x1280
- Number of parameters: 39.47M
- Model size: 158.5MB
- Output shape: 1x34000x85
The source model can be found [here](https://github.com/meituan/YOLOv6)
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [GPL-3.0](https://github.com/meituan/YOLOv6/blob/main/LICENSE)
- Deployable Model: [GPL-3.0](https://github.com/meituan/YOLOv6/blob/main/LICENSE) |
aieng-lab/bert-large-cased_smell-doc | aieng-lab | 2025-06-16T10:01:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:01:30Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-large-cased
pipeline_tag: text-classification
---
# BERT large for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-large-cased](https://huggingface.co/bert-large-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/bert-base-cased_smell-doc | aieng-lab | 2025-06-16T10:01:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:00:56Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-base-cased
pipeline_tag: text-classification
---
# BERT base for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-base-cased](https://huggingface.co/bert-base-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
veddhanth/lora-trained-xl-dreambooth-sneaker | veddhanth | 2025-06-16T09:56:22Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-16T09:40:33Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks sneaker
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-dreambooth-sneaker
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-dreambooth-sneaker LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks sneaker` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](veddhanth/lora-trained-xl-dreambooth-sneaker/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
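Until the TODO above is filled in, a generic sketch for loading SDXL LoRA weights with `diffusers` might look like this (standard API usage, not verified against this specific checkpoint):
```python
import torch
from diffusers import DiffusionPipeline

# Load the base SDXL pipeline and apply these LoRA adaptation weights
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-dreambooth-sneaker")

# The trigger phrase from the card activates the learned concept
image = pipe("a photo of sks sneaker").images[0]
image.save("sks_sneaker.png")
```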
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Puranjay14/my-awesome-model | Puranjay14 | 2025-06-16T09:55:57Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3n_text",
"text-generation",
"matformer",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-14T18:40:15Z | ---
library_name: transformers
tags:
- matformer
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
krustal/dqn-SpaceInvadersNoFrameskip-v4 | krustal | 2025-06-16T09:54:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T09:53:47Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 325.50 +/- 146.14
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga krustal -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga krustal -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga krustal
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 10000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 100),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
panjibao/BOAT_1.0 | panjibao | 2025-06-16T09:54:20Z | 0 | 0 | null | [
"arxiv:1602.02355",
"arxiv:1703.03400",
"arxiv:1806.04910",
"arxiv:1810.10667",
"arxiv:1806.09055",
"arxiv:2006.04045",
"arxiv:2405.09927",
"region:us"
] | null | 2025-06-16T09:53:18Z |
<h1 align="center">
<img src="./_static/logo.jpg" alt="BOAT" width="50%" align="top">
</h1>
<p align="center">
Task-Agnostic Operation Toolbox for Gradient-based Bilevel Optimization<br>
<a href="https://boat.readthedocs.io/en/latest/index.html">Home</a> |
<a href="https://boat.readthedocs.io/en/latest/install_guide.html#installation">Installation</a> |
<a href="https://boat.readthedocs.io/en/latest/boat_torch.html">Docs</a> |
<a href="https://boat.readthedocs.io/en/latest/install_guide.html#how-to-use-boat">Tutorials</a> |
<a href="https://boat.readthedocs.io/en/latest/index.html#running-example">Examples</a> |
</p>
[](https://badge.fury.io/py/boml)

[](https://codecov.io/github/callous-youth/BOAT)
[](https://github.com/callous-youth/BOAT/actions/workflows/pages/pages-build-deployment)






**BOAT** is a task-agnostic, gradient-based **Bi-Level Optimization (BLO)** Python library that focuses on abstracting the key BLO process into modular, flexible components. It enables researchers and developers to tackle learning tasks with hierarchical nested nature by providing customizable and diverse operator decomposition, encapsulation, and combination. BOAT supports specialized optimization strategies, including second-order or first-order, nested or non-nested, and with or without theoretical guarantees, catering to various levels of complexity.
To enhance flexibility and efficiency, BOAT incorporates the **Dynamic Operation Library (D-OL)** and the **Hyper Operation Library (H-OL)**, alongside a collection of state-of-the-art first-order optimization strategies. BOAT also provides multiple implementation versions:
- **[PyTorch-based](https://github.com/callous-youth/BOAT)**: An efficient and widely-used version.
- **[Jittor-based](https://github.com/callous-youth/BOAT/tree/boat_jit)**: An accelerated version for high-performance tasks.
- **[MindSpore-based](https://github.com/callous-youth/BOAT/tree/boat_ms)**: Incorporating the latest first-order optimization strategies to support emerging application scenarios.
<p align="center">
<a href="https://github.com/callous-youth/BOAT">
<img src="./_static/BOAT.png" alt="BOAT Structure" width="90%" align="top">
</a>
</p>
BOAT is designed to offer robust computational support for a broad spectrum of BLO research and applications, enabling innovation and efficiency in machine learning and computer vision.
## 🔑 **Key Features**
- **Dynamic Operation Library (D-OL)**: Incorporates 4 advanced dynamic system construction operations, enabling users to flexibly tailor optimization trajectories for BLO tasks.
- **Hyper-Gradient Operation Library (H-OL)**: Provides 9 refined operations for hyper-gradient computation, significantly enhancing the precision and efficiency of gradient-based BLO methods.
- **First-Order Gradient Methods (FOGMs)**: Integrates 4 state-of-the-art first-order methods, enabling fast prototyping and validation of new BLO algorithms. With modularized design, BOAT allows flexible combinations of multiple upper-level and lower-level operators, resulting in nearly 85 algorithmic combinations, offering unparalleled adaptability.
- **Modularized Design for Customization**: Empowers users to flexibly combine dynamic and hyper-gradient operations while customizing the specific forms of problems, parameters, and optimizer choices, enabling seamless integration into diverse task-specific codes.
- **Comprehensive Testing & Continuous Integration**: Achieves **99% code coverage** through rigorous testing with **pytest** and **Codecov**, coupled with continuous integration via **GitHub Actions**, ensuring software robustness and reliability.
- **Fast Prototyping & Algorithm Validation**: Streamlined support for defining, testing, and benchmarking new BLO algorithms.
- **Unified Computational Analysis**: Offers a comprehensive complexity analysis of gradient-based BLO techniques to guide users in selecting optimal configurations for efficiency and accuracy.
- **Detailed Documentation & Community Support**: Offers thorough documentation with practical examples and API references via **MkDocs**, ensuring accessibility and ease of use for both novice and advanced users.
## 🚀 **Why BOAT?**
Existing automatic differentiation (AD) tools primarily focus on specific optimization strategies, such as explicit or implicit methods, and are often targeted at meta-learning or specific application scenarios, lacking support for algorithm customization.
In contrast, **BOAT** expands the landscape of Bi-Level Optimization (BLO) applications by supporting a broader range of problem-adaptive operations. It bridges the gap between theoretical research and practical deployment, offering unparalleled flexibility to design, customize, and accelerate BLO techniques.
## 🏭 **Applications**
BOAT enables efficient implementation and adaptation of advanced BLO techniques for key applications, including but not limited to:
- **Hyperparameter Optimization (HO)**
- **Neural Architecture Search (NAS)**
- **Adversarial Training (AT)**
- **Few-Shot Learning (FSL)**
- **Generative Adversarial Learning**
- **Transfer Attack**
- ...
## 🔨 **Installation**
To install BOAT, use the following command:
```bash
pip install boat-torch
```
or run
```bash
git clone https://github.com/callous-youth/BOAT.git
cd BOAT
pip install -e .
```
## ⚡ **How to Use BOAT**
### **1. Load Configuration Files**
BOAT relies on two key configuration files:
- `boat_config.json`: Specifies optimization strategies and dynamic/hyper-gradient operations.
- `loss_config.json`: Defines the loss functions for both levels of the BLO process.
```python
import os
import json
import boat_torch as boat
# Load configuration files
with open("path_to_configs/boat_config.json", "r") as f:
boat_config = json.load(f)
with open("path_to_configs/loss_config.json", "r") as f:
loss_config = json.load(f)
```
### **2. Define Models and Optimizers**
You need to specify both the upper-level and lower-level models along with their respective optimizers.
```python
import torch
# Define models
upper_model = UpperModel(*args, **kwargs) # Replace with your upper-level model
lower_model = LowerModel(*args, **kwargs) # Replace with your lower-level model
# Define optimizers
upper_opt = torch.optim.Adam(upper_model.parameters(), lr=0.01)
lower_opt = torch.optim.SGD(lower_model.parameters(), lr=0.01)
```
### **3. Customize BOAT Configuration**
Modify the boat_config to include your dynamic and hyper-gradient methods, as well as model and variable details.
```python
# Example dynamic and hyper-gradient methods Combination.
dynamic_method = ["NGD", "DI", "GDA"] # Dynamic Methods (Demo Only)
hyper_method = ["RGT","RAD"] # Hyper-Gradient Methods (Demo Only)
# Add methods and model details to the configuration
boat_config["dynamic_op"] = dynamic_method
boat_config["hyper_op"] = hyper_method
boat_config["lower_level_model"] = lower_model
boat_config["upper_level_model"] = upper_model
boat_config["lower_level_opt"] = lower_opt
boat_config["upper_level_opt"] = upper_opt
boat_config["lower_level_var"] = list(lower_model.parameters())
boat_config["upper_level_var"] = list(upper_model.parameters())
```
### **4. Initialize the BOAT Problem**
Instantiate the BOAT problem from the two configurations, then build the solvers for the lower and upper levels.
```python
# Initialize the problem
b_optimizer = boat.Problem(boat_config, loss_config)
# Build solvers for lower and upper levels
b_optimizer.build_ll_solver() # Lower-level solver
b_optimizer.build_ul_solver() # Upper-level solver
```
### **5. Define Data Feeds**
Prepare the data feeds for both levels of the BLO process; these are passed to the upper-level and lower-level objective functions.
```python
# Define data feeds (Demo Only)
ul_feed_dict = {"data": upper_level_data, "target": upper_level_target}
ll_feed_dict = {"data": lower_level_data, "target": lower_level_target}
```
### **6. Run the Optimization Loop**
Execute the optimization loop, optionally customizing the solver strategy for dynamic methods.
```python
# Set number of iterations
iterations = 1000
# Optimization loop (Demo Only)
for x_itr in range(iterations):
# Run a single optimization iteration
loss, run_time = b_optimizer.run_iter(ll_feed_dict, ul_feed_dict, current_iter=x_itr)
```
## Related Methods
- [Hyperparameter optimization with approximate gradient (CG)](https://arxiv.org/abs/1602.02355)
- [Optimizing millions of hyperparameters by implicit differentiation (NS)](http://proceedings.mlr.press/v108/lorraine20a/lorraine20a.pdf)
- [Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (IAD)](https://arxiv.org/abs/1703.03400)
- [On First-Order Meta-Learning Algorithms (FOA)](https://arxiv.org/abs/1803.02999)
- [Bilevel Programming for Hyperparameter Optimization and Meta-Learning (RAD)](http://export.arxiv.org/pdf/1806.04910)
- [Truncated Back-propagation for Bilevel Optimization (RGT)](https://arxiv.org/pdf/1810.10667.pdf)
- [DARTS: Differentiable Architecture Search (FD)](https://arxiv.org/pdf/1806.09055.pdf)
- [A Generic First-Order Algorithmic Framework for Bi-Level Programming Beyond Lower-Level Singleton (GDA)](https://arxiv.org/pdf/2006.04045.pdf)
- [Towards gradient-based bilevel optimization with non-convex followers and beyond (PTT, DI)](https://proceedings.neurips.cc/paper_files/paper/2021/file/48bea99c85bcbaaba618ba10a6f69e44-Paper.pdf)
- [Averaged Method of Multipliers for Bi-Level Optimization without Lower-Level Strong Convexity(DM)](https://proceedings.mlr.press/v202/liu23y/liu23y.pdf)
- [Learning With Constraint Learning: New Perspective, Solution Strategy and Various Applications (IGA)](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10430445)
- [BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach (VFM)](https://proceedings.neurips.cc/paper_files/paper/2022/file/6dddcff5b115b40c998a08fbd1cea4d7-Paper-Conference.pdf)
- [A Value-Function-based Interior-point Method for Non-convex Bi-level Optimization (VSM)](http://proceedings.mlr.press/v139/liu21o/liu21o.pdf)
- [On Penalty-based Bilevel Gradient Descent Method (PGDM)](https://proceedings.mlr.press/v202/shen23c/shen23c.pdf)
- [Moreau Envelope for Nonconvex Bi-Level Optimization: A Single-loop and Hessian-free Solution Strategy (MESM)](https://arxiv.org/pdf/2405.09927)
## License
MIT License
Copyright (c) 2024 Yaohua Liu
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
|
LandCruiser/sn21_omegaany_1606_5 | LandCruiser | 2025-06-16T09:53:03Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-16T09:27:03Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn21_omegaany_1606_2 | LandCruiser | 2025-06-16T09:53:00Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-16T09:27:02Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.5_0.15_epoch1 | MinaMila | 2025-06-16T09:51:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T09:49:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_S-GGUF | Triangle104 | 2025-06-16T09:50:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T09:46:18Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
“**Risk of Sensitive or Controversial Outputs**”: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
“**Not Suitable for All Audiences**”: Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
“**Legal and Ethical Responsibilities**”: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
“**Research and Experimental Use**”: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
“**Monitoring and Review Recommendations**”: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
“**No Default Safety Guarantees**”: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models in the hope of matching the performance of each of them.
This is an experiment: merging different variants of the same model family is another possibility worth exploring.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_s.gguf -c 2048
```
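For Python users, the same quantized file can be loaded through the `llama-cpp-python` bindings. A minimal sketch, assuming `llama-cpp-python` (with `huggingface_hub`) is installed; the prompt and context size mirror the CLI examples above:
```python
# Minimal sketch using llama-cpp-python; downloads the GGUF from this repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_S-GGUF",
    filename="huihui-moe-24b-a8b-abliterated-q3_k_s.gguf",
    n_ctx=2048,  # same context size as the server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```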
|
LaaP-ai/donut-base-invoice-v1.01 | LaaP-ai | 2025-06-16T09:47:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-16T09:46:54Z | ---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.01
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
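For reference, a hedged sketch of how these hyperparameters map onto 🤗 `Seq2SeqTrainingArguments`; the `output_dir` value and anything not listed above are assumptions, not the actual training script:
```python
# Sketch only: reconstructs the listed hyperparameters as TrainingArguments.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="donut-base-invoice-v1.01",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 * 4 = 32 total train batch size
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```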
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
xche32/SUTrack | xche32 | 2025-06-16T09:47:13Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-06-16T08:28:11Z | ---
license: mit
---
> [**SUTrack: Towards Simple and Unified Single Object Tracking**](https://pan.baidu.com/s/10cR4tQt3lSH5V2RNf7-3gg?pwd=pks2)<br>
> Accepted by AAAI 2025<br>
> [Xin Chen](https://scholar.google.com.hk/citations?user=A04HWTIAAAAJ), [Ben Kang](https://scholar.google.com.hk/citations?user=By9F6bwAAAAJ), Wanting Geng, [Jiawen Zhu](https://scholar.google.com.hk/citations?user=j_gYsS8AAAAJ), Yi Liu, [Dong Wang](http://faculty.dlut.edu.cn/wangdongice/zh_CN/index.htm), [Huchuan Lu](https://ice.dlut.edu.cn/lu/)
Models for the paper SUTrack: Towards Simple and Unified Single Object Tracking, a unified framework for single- and multi-modal object tracking. |
eiknarf/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-amphibious_lumbering_beaver | eiknarf | 2025-06-16T09:46:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am amphibious lumbering beaver",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T02:09:15Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-amphibious_lumbering_beaver
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am amphibious lumbering beaver
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-amphibious_lumbering_beaver
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eiknarf/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-amphibious_lumbering_beaver", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sbx/KB-bert-base-swedish-cased_PI-detection-detailed-iob | sbx | 2025-06-16T09:45:47Z | 59 | 0 | null | [
"pytorch",
"safetensors",
"bert",
"token-classification",
"sv",
"arxiv:1910.09700",
"base_model:KB/bert-base-swedish-cased",
"base_model:finetune:KB/bert-base-swedish-cased",
"license:gpl-3.0",
"region:us"
] | token-classification | 2025-05-12T11:08:18Z | ---
license: gpl-3.0
language:
- sv
base_model:
- KB/bert-base-swedish-cased
pipeline_tag: token-classification
---
# sbx/KB-bert-base-swedish-cased_PI-detection-detailed-iob
This model was developed as part of the [Mormor Karl research environment](https://mormor-karl.github.io/).
It assesses whether each token is likely to belong to a piece of personal or sensitive information and assigns it the appropriate class (firstname male, firstname female, firstname unknown, initials, middlename, surname, school, work, other institution, area, city, geo, country, place, region, street nr, zip code, transport name, transport nr, age digits, age string, date digits, day, month digit, month word, year, phone nr, email, url, personid nr, account nr, license nr, other nr seq, extra, prof, edu, fam, sensitive, or none), distinguishing between beginnings and insides of such spans at the token level.
*Note that this model performs best on the domain it was trained on, i.e. second-language learner essays. It does not guarantee that all personal information will be detected, and it should only be used to assist personal information detection and classification, not to automate it completely.*
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Språkbanken Text](https://spraakbanken.gu.se/en), as part of the [Mormor Karl research environment](https://mormor-karl.github.io/)
- **Shared by:** Maria Irena Szawerna ([Turtilla](https://huggingface.co/Turtilla))
- **Model type:** BERT for token classification
- **Language(s):** Swedish
- **License:** GPL-3.0
- **Finetuned from model:** [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Paper:** The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling ([link](https://aclanthology.org/2025.nodalida-1.70/))
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model should be used to assist annotation or de-identification efforts conducted by humans as a pre-annotation step. The detection and classification of personal information can be followed by removal or replacement of said elements.
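A minimal pre-annotation sketch using the 🤗 `pipeline` API; the example sentence is an illustrative assumption:
```python
# Hedged sketch: token-level PI pre-annotation with the HF pipeline API.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="sbx/KB-bert-base-swedish-cased_PI-detection-detailed-iob",
)
for pred in tagger("Jag heter Anna och bor i Göteborg."):  # illustrative input
    print(pred["word"], pred["entity"], round(pred["score"], 3))
```
Predictions below a chosen confidence threshold are best routed to a human annotator rather than acted on automatically.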
<!-- ### Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model has only been trained on one domain (L2 learner essays). It does not necessarily perform well on other domains. Due to the limitations of the training data, it may also perform worse on rare or previously unseen types of personal information.
Given how high-stakes personal information detection is and that 100% accuracy cannot be guaranteed, this model should be used to assist, not automate, annotation or de-identification procedures.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
<!-- ## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
<!-- [More Information Needed] -->
<!-- ### Training Procedure -->
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
<!-- #### Preprocessing [optional] -->
<!-- [More Information Needed] -->
<!-- #### Training Hyperparameters -->
<!-- - **Training regime:** [More Information Needed] -->
<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
<!-- #### Speeds, Sizes, Times [optional] -->
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
<!-- [More Information Needed] -->
<!-- ## Evaluation -->
<!-- This section describes the evaluation protocols and provides the results. -->
<!-- ### Testing Data, Factors & Metrics -->
<!-- #### Testing Data -->
<!-- This should link to a Dataset Card if possible. -->
<!-- [More Information Needed] -->
<!-- #### Factors -->
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
<!-- [More Information Needed] -->
<!-- #### Metrics -->
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
<!-- [More Information Needed] -->
<!-- ### Results -->
<!-- [More Information Needed] -->
<!-- #### Summary -->
<!-- ## Model Examination [optional] -->
<!-- Relevant interpretability work for the model goes here -->
<!-- [More Information Needed] -->
<!-- ## Environmental Impact -->
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
<!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->
<!-- - **Hardware Type:** [More Information Needed] -->
<!-- - **Hours used:** [More Information Needed] -->
<!-- - **Cloud Provider:** [More Information Needed] -->
<!-- - **Compute Region:** [More Information Needed] -->
<!-- - **Carbon Emitted:** [More Information Needed] -->
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{szawerna-etal-2025-devils,
title = "The Devil{'}s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling",
author = "Szawerna, Maria Irena and
Dobnik, Simon and
Mu{\~n}oz S{\'a}nchez, Ricardo and
Volodina, Elena",
editor = "Johansson, Richard and
Stymne, Sara",
booktitle = "Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)",
month = mar,
year = "2025",
address = "Tallinn, Estonia",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2025.nodalida-1.70/",
pages = "697--708",
ISBN = "978-9908-53-109-0",
abstract = "In this paper, we experiment with the effect of different levels of detailedness or granularity{---}understood as i) the number of classes, and ii) the classes' semantic depth in the sense of hypernym and hyponym relations {---} of the annotation of Personally Identifiable Information (PII) on automatic detection and labeling of such information. We fine-tune a Swedish BERT model on a corpus of Swedish learner essays annotated with a total of six PII tagsets at varying levels of granularity. We also investigate whether the presence of grammatical and lexical correction annotation in the tokens and class prevalence have an effect on predictions. We observe that the fewer total categories there are, the better the overall results are, but having a more diverse annotation facilitates fewer misclassifications for tokens containing correction annotation. We also note that the classes' internal diversity has an effect on labeling. We conclude from the results that while labeling based on the detailed annotation is difficult because of the number of classes, it is likely that models trained on such annotation rely more on the semantic content captured by contextual word embeddings rather than just the form of the tokens, making them more robust against nonstandard language."
}
```
**APA:**
> Maria Irena Szawerna, Simon Dobnik, Ricardo Muñoz Sánchez, and Elena Volodina. 2025. The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling. In Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), pages 697–708, Tallinn, Estonia. University of Tartu Library.
## Model Card Authors
Maria Irena Szawerna ([Turtilla](https://huggingface.co/Turtilla))
<!-- ## Model Card Contact
[More Information Needed] --> |
UniLLMer/MuseKaaovercooked | UniLLMer | 2025-06-16T09:44:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:UniLLMer/MuseKaako6432e3e2jokesdwptoo",
"base_model:quantized:UniLLMer/MuseKaako6432e3e2jokesdwptoo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T09:42:27Z | ---
base_model: UniLLMer/MuseKaako6432e3e2jokesdwptoo
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** UniLLMer
- **License:** apache-2.0
- **Finetuned from model:** UniLLMer/MuseKaako6432e3e2jokesdwptoo
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aplux/YOLOv6m | aplux | 2025-06-16T09:44:51Z | 0 | 0 | null | [
"AIoT",
"QNN",
"LLM",
"object-detection",
"license:gpl-3.0",
"region:us"
] | object-detection | 2025-06-12T03:31:36Z | ---
license: gpl-3.0
pipeline_tag: object-detection
tags:
- AIoT
- QNN
- LLM
---

## YOLOv6m: Object Detection
YOLOv6 is an advanced real-time object detection model based on the "You Only Look Once" framework. It achieves faster inference speeds while maintaining high accuracy, making it suitable for various edge devices and high-performance servers. YOLOv6 enhances its ability to detect small objects and improves the model's generalization performance by optimizing the network architecture and introducing new loss functions. Additionally, YOLOv6 supports multi-scale training, ensuring excellent performance across different resolutions. It is widely applied in areas such as video surveillance, autonomous driving, and intelligent security.
### Source model
- Input shape: 1x3x640x640
- Number of parameters: 33.24M
- Model size: 133.20MB
- Output shape: 1x8400x85
The source model can be found [here](https://github.com/meituan/YOLOv6)
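The 1x8400x85 output follows the common YOLO layout of [cx, cy, w, h, objectness, 80 class scores] per candidate box; below is a hedged NumPy sketch of score thresholding under that assumption (NMS omitted):
```python
# Hedged sketch: decode 1x8400x85 predictions, assuming the usual YOLO
# row layout [cx, cy, w, h, objectness, 80 class scores].
import numpy as np

def decode(preds: np.ndarray, conf_thres: float = 0.45):
    preds = preds[0]                        # (8400, 85)
    scores = preds[:, 4:5] * preds[:, 5:]   # objectness * class probabilities
    cls_ids = scores.argmax(axis=1)
    confs = scores.max(axis=1)
    keep = confs > conf_thres
    return preds[keep, :4], cls_ids[keep], confs[keep]  # boxes in 640x640 space

boxes, cls_ids, confs = decode(np.random.rand(1, 8400, 85).astype(np.float32))
```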
## Performance Reference
Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search for the model by name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [GPL-3.0](https://github.com/meituan/YOLOv6/blob/47625514e7480706a46ff3c0cd0252907ac12f22/LICENSE)
- Deployable Model: [GPL-3.0](https://github.com/meituan/YOLOv6/blob/47625514e7480706a46ff3c0cd0252907ac12f22/LICENSE) |
Sullton/Gemma-2-2b-it-ChatMath | Sullton | 2025-06-16T09:44:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-16T09:43:41Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sbx/KB-bert-base-swedish-cased_PI-detection-detailed | sbx | 2025-06-16T09:44:29Z | 62 | 0 | null | [
"pytorch",
"safetensors",
"bert",
"token-classification",
"sv",
"arxiv:1910.09700",
"base_model:KB/bert-base-swedish-cased",
"base_model:finetune:KB/bert-base-swedish-cased",
"license:gpl-3.0",
"region:us"
] | token-classification | 2025-05-12T10:59:32Z | ---
license: gpl-3.0
language:
- sv
base_model:
- KB/bert-base-swedish-cased
pipeline_tag: token-classification
---
# sbx/KB-bert-base-swedish-cased_PI-detection-detailed
This model was developed as part of the [Mormor Karl research environment](https://mormor-karl.github.io/).
It assesses whether each token is likely to belong to a piece of personal or sensitive information and assigns it the appropriate class (firstname male, firstname female, firstname unknown, initials, middlename, surname, school, work, other institution, area, city, geo, country, place, region, street nr, zip code, transport name, transport nr, age digits, age string, date digits, day, month digit, month word, year, phone nr, email, url, personid nr, account nr, license nr, other nr seq, extra, prof, edu, fam, sensitive, or none).
*Note that this model performs best on the domain it was trained on, i.e. second-language learner essays. It does not guarantee that all personal information will be detected, and it should only be used to assist personal information detection and classification, not to automate it completely.*
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Språkbanken Text](https://spraakbanken.gu.se/en), as part of the [Mormor Karl research environment](https://mormor-karl.github.io/)
- **Shared by:** Maria Irena Szawerna ([Turtilla](https://huggingface.co/Turtilla))
- **Model type:** BERT for token classification
- **Language(s):** Swedish
- **License:** GPL-3.0
- **Finetuned from model:** [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Paper:** The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling ([link](https://aclanthology.org/2025.nodalida-1.70/))
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model should be used to assist annotation or de-identification efforts conducted by humans as a pre-annotation step. The detection and classification of personal information can be followed by removal or replacement of said elements.
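As with the IOB variant, a hedged `pipeline` sketch (the input sentence is an assumption):
```python
# Hedged sketch: flagging tokens for human review before de-identification.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="sbx/KB-bert-base-swedish-cased_PI-detection-detailed",
)
print(tagger("Min lärare Erik bor på Storgatan 5."))  # illustrative input
```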
<!-- ### Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model has only been trained on one domain (L2 learner essays). It does not necessarily perform well on other domains. Due to the limitations of the training data, it may also perform worse on rare or previously unseen types of personal information.
Given how high-stakes personal information detection is and that 100% accuracy cannot be guaranteed, this model should be used to assist, not automate, annotation or de-identification procedures.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
<!-- ## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
<!-- [More Information Needed] -->
<!-- ### Training Procedure -->
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
<!-- #### Preprocessing [optional] -->
<!-- [More Information Needed] -->
<!-- #### Training Hyperparameters -->
<!-- - **Training regime:** [More Information Needed] -->
<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
<!-- #### Speeds, Sizes, Times [optional] -->
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
<!-- [More Information Needed] -->
<!-- ## Evaluation -->
<!-- This section describes the evaluation protocols and provides the results. -->
<!-- ### Testing Data, Factors & Metrics -->
<!-- #### Testing Data -->
<!-- This should link to a Dataset Card if possible. -->
<!-- [More Information Needed] -->
<!-- #### Factors -->
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
<!-- [More Information Needed] -->
<!-- #### Metrics -->
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
<!-- [More Information Needed] -->
<!-- ### Results -->
<!-- [More Information Needed] -->
<!-- #### Summary -->
<!-- ## Model Examination [optional] -->
<!-- Relevant interpretability work for the model goes here -->
<!-- [More Information Needed] -->
<!-- ## Environmental Impact -->
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
<!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->
<!-- - **Hardware Type:** [More Information Needed] -->
<!-- - **Hours used:** [More Information Needed] -->
<!-- - **Cloud Provider:** [More Information Needed] -->
<!-- - **Compute Region:** [More Information Needed] -->
<!-- - **Carbon Emitted:** [More Information Needed] -->
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{szawerna-etal-2025-devils,
title = "The Devil{'}s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling",
author = "Szawerna, Maria Irena and
Dobnik, Simon and
Mu{\~n}oz S{\'a}nchez, Ricardo and
Volodina, Elena",
editor = "Johansson, Richard and
Stymne, Sara",
booktitle = "Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)",
month = mar,
year = "2025",
address = "Tallinn, Estonia",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2025.nodalida-1.70/",
pages = "697--708",
ISBN = "978-9908-53-109-0",
abstract = "In this paper, we experiment with the effect of different levels of detailedness or granularity{---}understood as i) the number of classes, and ii) the classes' semantic depth in the sense of hypernym and hyponym relations {---} of the annotation of Personally Identifiable Information (PII) on automatic detection and labeling of such information. We fine-tune a Swedish BERT model on a corpus of Swedish learner essays annotated with a total of six PII tagsets at varying levels of granularity. We also investigate whether the presence of grammatical and lexical correction annotation in the tokens and class prevalence have an effect on predictions. We observe that the fewer total categories there are, the better the overall results are, but having a more diverse annotation facilitates fewer misclassifications for tokens containing correction annotation. We also note that the classes' internal diversity has an effect on labeling. We conclude from the results that while labeling based on the detailed annotation is difficult because of the number of classes, it is likely that models trained on such annotation rely more on the semantic content captured by contextual word embeddings rather than just the form of the tokens, making them more robust against nonstandard language."
}
```
**APA:**
> Maria Irena Szawerna, Simon Dobnik, Ricardo Muñoz Sánchez, and Elena Volodina. 2025. The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling. In Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), pages 697–708, Tallinn, Estonia. University of Tartu Library.
## Model Card Authors
Maria Irena Szawerna ([Turtilla](https://huggingface.co/Turtilla))
<!-- ## Model Card Contact
[More Information Needed] --> |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.5_0.25_epoch2 | MinaMila | 2025-06-16T09:44:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T09:42:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nitish035/mistral_CMoS_adapter32_combine_single-level2 | Nitish035 | 2025-06-16T09:39:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T09:39:49Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Nitish035
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.75_0.75_epoch1 | MinaMila | 2025-06-16T09:39:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T09:37:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sbx/KB-bert-base-swedish-cased_PI-detection-basic-iob | sbx | 2025-06-16T09:39:37Z | 16 | 0 | null | [
"pytorch",
"safetensors",
"bert",
"token-classification",
"sv",
"arxiv:1910.09700",
"base_model:KB/bert-base-swedish-cased",
"base_model:finetune:KB/bert-base-swedish-cased",
"license:gpl-3.0",
"region:us"
] | token-classification | 2025-05-12T09:24:19Z | ---
license: gpl-3.0
language:
- sv
base_model:
- KB/bert-base-swedish-cased
pipeline_tag: token-classification
---
# sbx/KB-bert-base-swedish-cased_PI-detection-basic-iob
This model was developed as part of the [Mormor Karl research environment](https://mormor-karl.github.io/).
It assesses whether each token is likely to belong to a piece of personal or sensitive information and assigns it the appropriate class (sensitive or not), distinguishing between beginnings and insides of such spans at the token level.
*Note that this model performs best on the domain it was trained on, i.e. second-language learner essays. It does not guarantee that all personal information will be detected, and it should only be used to assist personal information detection and classification, not to automate it completely.*
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Språkbanken Text](https://spraakbanken.gu.se/en), as part of the [Mormor Karl research environment](https://mormor-karl.github.io/)
- **Shared by:** Maria Irena Szawerna ([Turtilla](https://huggingface.co/Turtilla))
- **Model type:** BERT for token classification
- **Language(s):** Swedish
- **License:** GPL-3.0
- **Finetuned from model:** [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Paper:** The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling ([link](https://aclanthology.org/2025.nodalida-1.70/))
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model should be used to assist annotation or de-identification efforts conducted by humans as a pre-annotation step. The detection and classification of personal information can be followed by removal or replacement of said elements.
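A hedged `pipeline` sketch for this binary tagset (the input sentence is an assumption):
```python
# Hedged sketch: binary sensitive/not token tagging as a pre-annotation pass.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="sbx/KB-bert-base-swedish-cased_PI-detection-basic-iob",
)
print(tagger("Jag flyttade till Malmö i maj 2020."))  # illustrative input
```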
<!-- ### Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model has only been trained on one domain (L2 learner essays). It does not necessarily perform well on other domains. Due to the limitations of the training data, it may also perform worse on rare or previously unseen types of personal information.
Given how high-stakes personal information detection is and that 100% accuracy cannot be guaranteed, this model should be used to assist, not automate, annotation or de-identification procedures.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
<!-- ## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
<!-- [More Information Needed] -->
<!-- ### Training Procedure -->
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
<!-- #### Preprocessing [optional] -->
<!-- [More Information Needed] -->
<!-- #### Training Hyperparameters -->
<!-- - **Training regime:** [More Information Needed] -->
<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
<!-- #### Speeds, Sizes, Times [optional] -->
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
<!-- [More Information Needed] -->
<!-- ## Evaluation -->
<!-- This section describes the evaluation protocols and provides the results. -->
<!-- ### Testing Data, Factors & Metrics -->
<!-- #### Testing Data -->
<!-- This should link to a Dataset Card if possible. -->
<!-- [More Information Needed] -->
<!-- #### Factors -->
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
<!-- [More Information Needed] -->
<!-- #### Metrics -->
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
<!-- [More Information Needed] -->
<!-- ### Results -->
<!-- [More Information Needed] -->
<!-- #### Summary -->
<!-- ## Model Examination [optional] -->
<!-- Relevant interpretability work for the model goes here -->
<!-- [More Information Needed] -->
<!-- ## Environmental Impact -->
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
<!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->
<!-- - **Hardware Type:** [More Information Needed] -->
<!-- - **Hours used:** [More Information Needed] -->
<!-- - **Cloud Provider:** [More Information Needed] -->
<!-- - **Compute Region:** [More Information Needed] -->
<!-- - **Carbon Emitted:** [More Information Needed] -->
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{szawerna-etal-2025-devils,
title = "The Devil{'}s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling",
author = "Szawerna, Maria Irena and
Dobnik, Simon and
Mu{\~n}oz S{\'a}nchez, Ricardo and
Volodina, Elena",
editor = "Johansson, Richard and
Stymne, Sara",
booktitle = "Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)",
month = mar,
year = "2025",
address = "Tallinn, Estonia",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2025.nodalida-1.70/",
pages = "697--708",
ISBN = "978-9908-53-109-0",
abstract = "In this paper, we experiment with the effect of different levels of detailedness or granularity{---}understood as i) the number of classes, and ii) the classes' semantic depth in the sense of hypernym and hyponym relations {---} of the annotation of Personally Identifiable Information (PII) on automatic detection and labeling of such information. We fine-tune a Swedish BERT model on a corpus of Swedish learner essays annotated with a total of six PII tagsets at varying levels of granularity. We also investigate whether the presence of grammatical and lexical correction annotation in the tokens and class prevalence have an effect on predictions. We observe that the fewer total categories there are, the better the overall results are, but having a more diverse annotation facilitates fewer misclassifications for tokens containing correction annotation. We also note that the classes' internal diversity has an effect on labeling. We conclude from the results that while labeling based on the detailed annotation is difficult because of the number of classes, it is likely that models trained on such annotation rely more on the semantic content captured by contextual word embeddings rather than just the form of the tokens, making them more robust against nonstandard language."
}
```
**APA:**
> Maria Irena Szawerna, Simon Dobnik, Ricardo Muñoz Sánchez, and Elena Volodina. 2025. The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling. In Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), pages 697–708, Tallinn, Estonia. University of Tartu Library.
## Model Card Authors
Maria Irena Szawerna ([Turtilla](https://huggingface.co/Turtilla))
<!-- ## Model Card Contact
[More Information Needed] --> |
phronetic-ai/RZN-T | phronetic-ai | 2025-06-16T09:39:12Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T05:20:37Z | ---
library_name: transformers
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B
---
#### You can load this model like so:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("phronetic-ai/RZN-T", torch_dtype=torch.bfloat16)
tokeniser = AutoTokenizer.from_pretrained("phronetic-ai/RZN-T")
```
#### Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "phronetic-ai/RZN-T"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
``` |
sc22mc/DocFusion | sc22mc | 2025-06-16T09:38:57Z | 78 | 0 | null | [
"pytorch",
"safetensors",
"docfusion",
"image-text-to-text",
"custom_code",
"arxiv:2412.12505",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-01-28T08:16:08Z | ---
license: apache-2.0
pipeline_tag: image-text-to-text
---
### DocFusion: A Unified Framework for Document Parsing Tasks
Document parsing involves layout element detection and recognition, essential for extracting information. However, existing methods often employ multiple models for these tasks, leading to increased system complexity and maintenance overhead. While some models attempt to unify detection and recognition, they often fail to address the intrinsic differences in data representations, thereby limiting performance in document processing. Our research reveals that recognition relies on discrete tokens, whereas detection relies on continuous coordinates, leading to challenges in gradient updates and optimization. To bridge this gap, we propose the Gaussian-Kernel CrossEntropy Loss (GK-CEL), enabling generative frameworks to handle both tasks simultaneously. Building upon GK-CEL, we propose DocFusion, a unified document parsing model with only 0.28B parameters. Additionally, we construct the DocLatex-1.6M dataset to provide high-quality training support. Experimental results show that DocFusion, equipped with GK-CEL, performs competitively across four core document parsing tasks, validating the effectiveness of our unified approach.
Resources and Technical Documentation:
+ [Technical Report](https://arxiv.org/abs/2412.12505)
+ [Jupyter Notebook for inference](https://huggingface.co/sc22mc/DocFusion/blob/main/infer.ipynb) |
FormlessAI/5c0e2ebb-92bd-4f35-8739-d182f3d3af8a | FormlessAI | 2025-06-16T09:33:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:finetune:unsloth/SmolLM2-1.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T06:53:50Z | ---
base_model: unsloth/SmolLM2-1.7B
library_name: transformers
model_name: 5c0e2ebb-92bd-4f35-8739-d182f3d3af8a
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 5c0e2ebb-92bd-4f35-8739-d182f3d3af8a
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/5c0e2ebb-92bd-4f35-8739-d182f3d3af8a", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/4prd5ixq)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dhruvsangani/FeatSystems-LLM-QA | dhruvsangani | 2025-06-16T09:33:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T09:17:12Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
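A minimal inference sketch (assuming the repo contains merged weights loadable with vanilla `transformers`; the example question is illustrative only):
```python
from transformers import pipeline

# Hypothetical usage; verify that the repo hosts full (merged) weights rather than adapters only
generator = pipeline("text-generation", model="dhruvsangani/FeatSystems-LLM-QA", device_map="auto")

output = generator(
    [{"role": "user", "content": "What services does FeatSystems provide?"}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```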
|
tomaarsen/Qwen3-Reranker-8B-seq-cls | tomaarsen | 2025-06-16T09:30:38Z | 0 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen3",
"text-classification",
"transformers",
"text-ranking",
"arxiv:2506.05176",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-ranking | 2025-06-16T09:29:13Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B-Base
tags:
- transformers
- sentence-transformers
pipeline_tag: text-ranking
---
# Qwen3-Reranker-8B
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>
> [!NOTE]
> This is a copy of the [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) model, part of the [Qwen3 Reranker series](https://huggingface.co/collections/Qwen/qwen3-reranker-6841b22d0192d7ade9cdefea), modified as a sequence classification model instead. See [Updated Usage](#updated-usage) for details on how to use it, or [Original Usage](#original-usage) for the original usage.
>
> See [this discussion](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B/discussions/3) for details on the conversion approach.
## Highlights
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embeddings and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
**Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks No.1 in the MTEB multilingual leaderboard (as of June 5, 2025, score 70.58), while the reranking model excels in various text retrieval scenarios.
**Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
**Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
## Model Overview
**Qwen3-Reranker-8B** has the following features:
- Model Type: Text Reranking
- Supported Languages: 100+ Languages
- Number of Parameters: 8B
- Context Length: 32k
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/), [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
## Qwen3 Embedding Series Model list
| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
> **Note**:
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
> - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
## Usage
With Transformers versions earlier than 4.51.0, you may encounter the following error (upgrading `transformers` to 4.51.0 or later resolves it):
```
KeyError: 'qwen3'
```
### Updated Usage
#### Updated Sentence Transformers Usage
```python
# Requires transformers>=4.51.0
from sentence_transformers import CrossEncoder
def format_queries(query, instruction=None):
prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
if instruction is None:
instruction = (
"Given a web search query, retrieve relevant passages that answer the query"
)
return f"{prefix}<Instruct>: {instruction}\n<Query>: {query}\n"
def format_document(document):
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
return f"<Document>: {document}{suffix}"
model = CrossEncoder("tomaarsen/Qwen3-Reranker-8B-seq-cls")
task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [
"Which planet is known as the Red Planet?",
"Which planet is known as the Red Planet?",
"Which planet is known as the Red Planet?",
"Which planet is known as the Red Planet?",
]
documents = [
"Venus is often called Earth's twin because of its similar size and proximity.",
"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
"Jupiter, the largest planet in our solar system, has a prominent red spot.",
"Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]
pairs = [
[format_queries(query, task), format_document(doc)]
for query, doc in zip(queries, documents)
]
scores = model.predict(pairs)
print(scores.tolist())
# [0.0003314583736937493, 0.9842268824577332, 0.004446804523468018, 0.009984465315937996]
```
#### Updated Transformers Usage
```python
# Requires transformers>=4.51.0
from transformers import AutoModelForSequenceClassification, AutoTokenizer
def format_instruction(instruction, query, doc):
prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
if instruction is None:
instruction = (
"Given a web search query, retrieve relevant passages that answer the query"
)
output = f"{prefix}<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}{suffix}"
return output
tokenizer = AutoTokenizer.from_pretrained("tomaarsen/Qwen3-Reranker-8B-seq-cls", padding_side="left")
model = AutoModelForSequenceClassification.from_pretrained("tomaarsen/Qwen3-Reranker-8B-seq-cls").eval()
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModelForSequenceClassification.from_pretrained("tomaarsen/Qwen3-Reranker-8B-seq-cls", torch_dtype=torch.float16, attn_implementation="flash_attention_2").cuda().eval()
max_length = 8192
task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [
"Which planet is known as the Red Planet?",
"Which planet is known as the Red Planet?",
"Which planet is known as the Red Planet?",
"Which planet is known as the Red Planet?",
]
documents = [
"Venus is often called Earth's twin because of its similar size and proximity.",
"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
"Jupiter, the largest planet in our solar system, has a prominent red spot.",
"Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]
pairs = [format_instruction(task, query, doc) for query, doc in zip(queries, documents)]
inputs = tokenizer(
pairs,
padding=True,
truncation=True,
max_length=max_length,
return_tensors="pt",
)
logits = model(**inputs).logits.squeeze()
print(logits.tolist())
# [-8.011678695678711, 4.133551120758057, -5.411122798919678, -4.5966949462890625]
scores = logits.sigmoid()
print(scores.tolist())
# [0.00033145773340947926, 0.9842268824577332, 0.004446760285645723, 0.009984418749809265]
```
### Original Usage
#### Original Transformers Usage
```python
# Requires transformers>=4.51.0
import torch
from transformers import AutoModel, AutoTokenizer, AutoModelForCausalLM
def format_instruction(instruction, query, doc):
if instruction is None:
instruction = 'Given a web search query, retrieve relevant passages that answer the query'
output = "<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}".format(instruction=instruction,query=query, doc=doc)
return output
def process_inputs(pairs):
inputs = tokenizer(
pairs, padding=False, truncation='longest_first',
return_attention_mask=False, max_length=max_length - len(prefix_tokens) - len(suffix_tokens)
)
for i, ele in enumerate(inputs['input_ids']):
inputs['input_ids'][i] = prefix_tokens + ele + suffix_tokens
inputs = tokenizer.pad(inputs, padding=True, return_tensors="pt", max_length=max_length)
for key in inputs:
inputs[key] = inputs[key].to(model.device)
return inputs
@torch.no_grad()
def compute_logits(inputs, **kwargs):
batch_scores = model(**inputs).logits[:, -1, :]
true_vector = batch_scores[:, token_true_id]
false_vector = batch_scores[:, token_false_id]
batch_scores = torch.stack([false_vector, true_vector], dim=1)
batch_scores = torch.nn.functional.log_softmax(batch_scores, dim=1)
scores = batch_scores[:, 1].exp().tolist()
return scores
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-8B", padding_side='left')
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-8B").eval()
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-8B", torch_dtype=torch.float16, attn_implementation="flash_attention_2").cuda().eval()
token_false_id = tokenizer.convert_tokens_to_ids("no")
token_true_id = tokenizer.convert_tokens_to_ids("yes")
max_length = 8192
prefix = "<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\".<|im_end|>\n<|im_start|>user\n"
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
prefix_tokens = tokenizer.encode(prefix, add_special_tokens=False)
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
"Which planet is known as the Red Planet?",
"Which planet is known as the Red Planet?",
"Which planet is known as the Red Planet?",
"Which planet is known as the Red Planet?",
]
documents = [
"Venus is often called Earth's twin because of its similar size and proximity.",
"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
"Jupiter, the largest planet in our solar system, has a prominent red spot.",
"Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]
pairs = [format_instruction(task, query, doc) for query, doc in zip(queries, documents)]
# Tokenize the input texts
inputs = process_inputs(pairs)
scores = compute_logits(inputs)
print("scores: ", scores)
# scores: [0.00033198529854416847, 0.9842491745948792, 0.00445373822003603, 0.010004101321101189]
```
📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
## Evaluation
| Model | Param | MTEB-R | CMTEB-R | MMTEB-R | MLDR | MTEB-Code | FollowIR |
|------------------------------------|--------|---------|---------|---------|--------|-----------|----------|
| **Qwen3-Embedding-0.6B** | 0.6B | 61.82 | 71.02 | 64.64 | 50.26 | 75.41 | 5.09 |
| Jina-multilingual-reranker-v2-base | 0.3B | 58.22 | 63.37 | 63.73 | 39.66 | 58.98 | -0.68 |
| gte-multilingual-reranker-base | 0.3B | 59.51 | 74.08 | 59.44 | 66.33 | 54.18 | -1.64 |
| BGE-reranker-v2-m3 | 0.6B | 57.03 | 72.16 | 58.36 | 59.51 | 41.38 | -0.01 |
| **Qwen3-Reranker-0.6B** | 0.6B | 65.80 | 71.31 | 66.36 | 67.28 | 73.42 | 5.41 |
| **Qwen3-Reranker-4B** | 4B | **69.76** | 75.94 | 72.74 | 69.97 | 81.20 | **14.84** |
| **Qwen3-Reranker-8B** | 8B | 69.02 | **77.45** | **72.94** | **70.19** | **81.22** | 8.05 |
> **Note**:
> - Evaluation results for reranking models. We use the retrieval subsets of MTEB(eng, v2), MTEB(cmn, v1), MMTEB and MTEB (Code), which are MTEB-R, CMTEB-R, MMTEB-R and MTEB-Code.
> - All scores are our runs based on the top-100 candidates retrieved by dense embedding model [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen3embedding,
title={Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models},
author={Zhang, Yanzhao and Li, Mingxin and Long, Dingkun and Zhang, Xin and Lin, Huan and Yang, Baosong and Xie, Pengjun and Yang, An and Liu, Dayiheng and Lin, Junyang and Huang, Fei and Zhou, Jingren},
journal={arXiv preprint arXiv:2506.05176},
year={2025}
}
``` |
Mohax3/whisper-small-ja | Mohax3 | 2025-06-16T09:30:29Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ja",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-11T03:06:15Z | ---
library_name: transformers
language:
- ja
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small Ja - TakaSan
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: ja
split: None
args: 'config: ja, split: test'
metrics:
- name: Wer
type: wer
value: 82.64553585438568
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ja - TakaSan
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5403
- Wer: 82.6455
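For quick testing, a minimal transcription sketch with the ASR pipeline (a sketch only; `sample_ja.wav` is a placeholder path, not part of this repo):
```python
from transformers import pipeline

# Japanese speech-to-text with the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="Mohax3/whisper-small-ja")

# Placeholder path; any 16 kHz mono audio file works
result = asr("sample_ja.wav", generate_kwargs={"language": "japanese", "task": "transcribe"})
print(result["text"])
```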
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3518 | 0.9814 | 1000 | 0.5291 | 83.8067 |
| 0.2439 | 1.9627 | 2000 | 0.5110 | 84.9051 |
| 0.1152 | 2.9441 | 3000 | 0.5191 | 81.8296 |
| 0.0657 | 3.9254 | 4000 | 0.5403 | 82.6455 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
LakshGupta/ppo-Huggy | LakshGupta | 2025-06-16T09:29:56Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-06-16T09:29:52Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: LakshGupta/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
peammy/distilbert-base-uncased-issues-128-FINER-ABSA | peammy | 2025-06-16T09:28:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-16T08:23:56Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-issues-128-FINER-ABSA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-issues-128-FINER-ABSA
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2755
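For quick testing, a minimal fill-mask sketch (the example sentence is illustrative only; predictions will reflect the domain-specific training data):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="peammy/distilbert-base-uncased-issues-128-FINER-ABSA")

# BERT-style models use the [MASK] token
for pred in fill_mask("The quarterly revenue [MASK] significantly."):
    print(pred["token_str"], round(pred["score"], 4))
```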
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3368 | 1.0 | 388 | 2.1709 |
| 2.0795 | 2.0 | 776 | 2.4556 |
| 1.9588 | 3.0 | 1164 | 2.1362 |
| 1.8588 | 4.0 | 1552 | 2.1211 |
| 1.7829 | 5.0 | 1940 | 2.0847 |
| 1.7205 | 6.0 | 2328 | 1.9679 |
| 1.6457 | 7.0 | 2716 | 2.3018 |
| 1.592 | 8.0 | 3104 | 2.5476 |
| 1.5514 | 9.0 | 3492 | 2.1689 |
| 1.4995 | 10.0 | 3880 | 2.3121 |
| 1.454 | 11.0 | 4268 | 2.0584 |
| 1.4353 | 12.0 | 4656 | 2.4454 |
| 1.398 | 13.0 | 5044 | 2.3003 |
| 1.3632 | 14.0 | 5432 | 2.2741 |
| 1.3431 | 15.0 | 5820 | 2.3972 |
| 1.2959 | 16.0 | 6208 | 2.0092 |
| 1.2502 | 17.0 | 6596 | 2.6074 |
| 1.2428 | 18.0 | 6984 | 2.4610 |
| 1.2173 | 19.0 | 7372 | 2.2601 |
| 1.1897 | 20.0 | 7760 | 1.9136 |
| 1.1794 | 21.0 | 8148 | 2.3777 |
| 1.1414 | 22.0 | 8536 | 2.4453 |
| 1.1451 | 23.0 | 8924 | 2.2062 |
| 1.0995 | 24.0 | 9312 | 2.0046 |
| 1.0842 | 25.0 | 9700 | 2.3008 |
| 1.0775 | 26.0 | 10088 | 2.2165 |
| 1.0534 | 27.0 | 10476 | 2.4383 |
| 1.045 | 28.0 | 10864 | 2.5337 |
| 1.0316 | 29.0 | 11252 | 1.9797 |
| 1.0143 | 30.0 | 11640 | 2.3651 |
| 0.9912 | 31.0 | 12028 | 2.2892 |
| 0.9995 | 32.0 | 12416 | 2.2585 |
| 0.9715 | 33.0 | 12804 | 2.3780 |
| 0.9683 | 34.0 | 13192 | 2.1615 |
| 0.9452 | 35.0 | 13580 | 2.5597 |
| 0.9563 | 36.0 | 13968 | 2.3249 |
| 0.9555 | 37.0 | 14356 | 2.6080 |
| 0.9446 | 38.0 | 14744 | 2.2567 |
| 0.9307 | 39.0 | 15132 | 2.3204 |
| 0.9306 | 40.0 | 15520 | 2.2755 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nguyenanh2803/qwen-r1-aha-moment | nguyenanh2803 | 2025-06-16T09:26:16Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T06:06:48Z | ---
base_model: Qwen/Qwen2-1.5B-Instruct
library_name: transformers
model_name: qwen-r1-aha-moment
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for qwen-r1-aha-moment
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nguyenanh2803/qwen-r1-aha-moment", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zehongma/OVMR | zehongma | 2025-06-16T09:25:02Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T09:05:08Z | # The official pretrained weights of [OVMR](https://github.com/Zehong-Ma/OVMR).
Please refer to the [paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Ma_OVMR_Open-Vocabulary_Recognition_with_Multi-Modal_References_CVPR_2024_paper.pdf) and [code](https://github.com/Zehong-Ma/OVMR) to get more details.
---
license: apache-2.0
---
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.05_0.05_epoch1 | MinaMila | 2025-06-16T09:23:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T09:22:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
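No official instructions are provided; a generic sketch, assuming the repo hosts a standard `gemma2` causal LM as the repo tags suggest:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard causal-LM loading applies to this checkpoint
model_id = "MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.05_0.05_epoch1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```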
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
erdem-erdem/Qwen2.5-3B-Instruct-countdown-game-8k-qwq-r64 | erdem-erdem | 2025-06-16T09:23:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T09:21:11Z | ---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** erdem-erdem
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
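A minimal inference sketch (assuming merged weights loadable with vanilla `transformers`; the countdown-style prompt below is illustrative, not the documented training format):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "erdem-erdem/Qwen2.5-3B-Instruct-countdown-game-8k-qwq-r64"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative countdown-style prompt; the exact training format is not documented here
messages = [{"role": "user", "content": "Using the numbers 3, 7, 25, and 50, reach the target 128."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```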
|
Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_M-GGUF | Triangle104 | 2025-06-16T09:22:16Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:nbeerbower/human-writing-dpo",
"base_model:nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B",
"base_model:quantized:nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T08:06:44Z | ---
license: apache-2.0
datasets:
- nbeerbower/human-writing-dpo
base_model: nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B`](https://huggingface.co/nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B) for more details on the model.
---
Mistral-Nemo-Gutenberg-Encore-12B finetuned on nbeerbower/human-writing-dpo with Mistral Instruct.
## Method
ORPO tuned with 1x RTX A6000 for 3 epochs.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_M-GGUF --hf-file mistral-nemo-gutenberg-vitus-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_M-GGUF --hf-file mistral-nemo-gutenberg-vitus-12b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_M-GGUF --hf-file mistral-nemo-gutenberg-vitus-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_M-GGUF --hf-file mistral-nemo-gutenberg-vitus-12b-q4_k_m.gguf -c 2048
```
|
aplux/YOLOv5s-seg | aplux | 2025-06-16T09:21:31Z | 0 | 0 | null | [
"AIoT",
"QNN",
"image-segmentation",
"license:agpl-3.0",
"region:us"
] | image-segmentation | 2025-06-11T09:33:04Z | ---
license: agpl-3.0
pipeline_tag: image-segmentation
tags:
- AIoT
- QNN
---

## YOLOv5s-seg: Semantic Segmentation
YOLOv5-Seg is an instance segmentation model in the YOLOv5 series, extending the YOLOv5 object detection architecture. Unlike standard YOLOv5, YOLOv5-Seg not only detects object locations but also generates precise segmentation masks for each detected object, enabling instance-level segmentation. This model adds a segmentation head and mask branch, incorporating convolutional layers and upsampling operations to produce high-quality segmentation results. YOLOv5-Seg retains the efficiency and real-time performance of YOLOv5, making it suitable for detailed image understanding tasks. It is widely used in applications such as autonomous driving, medical image analysis, and video surveillance, where instance segmentation is required.
### Source model
- Input shape: 224x224
- Number of parameters: 7.62M
- Model size: 29.1M
- Output shape: 1x3087x117, 1x32x56x56
Source model repository: [YOLOv5s-seg](https://github.com/ultralytics/yolov5)
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [AGPL-3.0](https://github.com/ultralytics/yolov5/blob/master/LICENSE)
- Deployable Model: [AGPL-3.0](https://github.com/ultralytics/yolov5/blob/master/LICENSE) |
Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_S-GGUF | Triangle104 | 2025-06-16T09:20:29Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:nbeerbower/human-writing-dpo",
"base_model:nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B",
"base_model:quantized:nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T07:57:25Z | ---
license: apache-2.0
datasets:
- nbeerbower/human-writing-dpo
base_model: nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_S-GGUF
This model was converted to GGUF format from [`nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B`](https://huggingface.co/nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B) for more details on the model.
---
Mistral-Nemo-Gutenberg-Encore-12B finetuned on nbeerbower/human-writing-dpo with Mistral Instruct.
## Method
ORPO tuned with 1x RTX A6000 for 3 epochs.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_S-GGUF --hf-file mistral-nemo-gutenberg-vitus-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_S-GGUF --hf-file mistral-nemo-gutenberg-vitus-12b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_S-GGUF --hf-file mistral-nemo-gutenberg-vitus-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Mistral-Nemo-Gutenberg-Vitus-12B-Q4_K_S-GGUF --hf-file mistral-nemo-gutenberg-vitus-12b-q4_k_s.gguf -c 2048
```
|
Entropicengine/LiquidGold-MS-L3.3-70b | Entropicengine | 2025-06-16T09:19:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Steelskull/L3.3-Cu-Mai-R1-70b",
"base_model:merge:Steelskull/L3.3-Cu-Mai-R1-70b",
"base_model:Steelskull/L3.3-MS-Nevoria-70b",
"base_model:merge:Steelskull/L3.3-MS-Nevoria-70b",
"base_model:Tarek07/Legion-V2.1-LLaMa-70B",
"base_model:merge:Tarek07/Legion-V2.1-LLaMa-70B",
"base_model:Tarek07/Progenitor-V3.3-LLaMa-70B",
"base_model:merge:Tarek07/Progenitor-V3.3-LLaMa-70B",
"base_model:zerofata/L3.3-GeneticLemonade-Final-70B",
"base_model:merge:zerofata/L3.3-GeneticLemonade-Final-70B",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T01:51:36Z | ---
base_model:
- Steelskull/L3.3-Cu-Mai-R1-70b
- Tarek07/Progenitor-V3.3-LLaMa-70B
- Tarek07/Legion-V2.1-LLaMa-70B
- zerofata/L3.3-GeneticLemonade-Final-70B
- Steelskull/L3.3-MS-Nevoria-70b
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---
~ ⚱️⚜️ ~

# LiquidGold-MS-L3.3-70b
## Recommended preset :
- [[email protected]](https://huggingface.co/Konnect1221/The-Inception-Presets-Methception-LLamaception-Qwenception/blob/main/Llam%40ception/Llam%40ception-1.5.json)
## Quants (courtesy: team mradermacher)
- [Static quants](https://huggingface.co/mradermacher/LiquidGold-MS-L3.3-70b-GGUF)
- [Weighted/Imatrix quants](https://huggingface.co/mradermacher/LiquidGold-MS-L3.3-70b-i1-GGUF)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Steelskull/L3.3-MS-Nevoria-70b](https://huggingface.co/Steelskull/L3.3-MS-Nevoria-70b) as a base.
### Models Merged
The following models were included in the merge:
* [Steelskull/L3.3-Cu-Mai-R1-70b](https://huggingface.co/Steelskull/L3.3-Cu-Mai-R1-70b)
* [Tarek07/Progenitor-V3.3-LLaMa-70B](https://huggingface.co/Tarek07/Progenitor-V3.3-LLaMa-70B)
* [Tarek07/Legion-V2.1-LLaMa-70B](https://huggingface.co/Tarek07/Legion-V2.1-LLaMa-70B)
* [zerofata/L3.3-GeneticLemonade-Final-70B](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Steelskull/L3.3-MS-Nevoria-70b
dtype: bfloat16
merge_method: model_stock
modules:
default:
slices:
- sources:
- layer_range: [0, 80]
model: Steelskull/L3.3-Cu-Mai-R1-70b
- layer_range: [0, 80]
model: Tarek07/Progenitor-V3.3-LLaMa-70B
- layer_range: [0, 80]
model: zerofata/L3.3-GeneticLemonade-Final-70B
- layer_range: [0, 80]
model: Tarek07/Legion-V2.1-LLaMa-70B
- layer_range: [0, 80]
model: Steelskull/L3.3-MS-Nevoria-70b
``` |
amityco/typhoon-vllm | amityco | 2025-06-16T09:18:25Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T09:16:41Z | ---
license: apache-2.0
---
|
IoanaLiviaPopescu/real-data-synth-data-1600-1-St-Emil-whisper-small | IoanaLiviaPopescu | 2025-06-16T09:17:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLivia/RealVoiceSynthVoice-1600-1-St-Emil",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-16T08:10:49Z | ---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLivia/RealVoiceSynthVoice-1600-1-St-Emil
metrics:
- wer
model-index:
- name: IoanaLiviaPopescu/real-data-synth-data-1600-1-St-Emil-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLivia/RealVoiceSynthVoice-1600-1-St-Emil
type: IoanaLivia/RealVoiceSynthVoice-1600-1-St-Emil
config: default
split: test
args: 'split: validation'
metrics:
- name: Wer
type: wer
value: 17.038539553752535
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLiviaPopescu/real-data-synth-data-1600-1-St-Emil-whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLivia/RealVoiceSynthVoice-1600-1-St-Emil dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3718
- Wer: 17.0385
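A minimal transcription sketch using the processor and model directly (a sketch only; `sample_ro.wav` is a placeholder, and decoding is forced to Romanian to match the fine-tuning language):
```python
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "IoanaLiviaPopescu/real-data-synth-data-1600-1-St-Emil-whisper-small"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Placeholder path; Whisper expects 16 kHz input
audio, _ = librosa.load("sample_ro.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

predicted_ids = model.generate(inputs.input_features, language="ro", task="transcribe")
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```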
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.6024 | 27.8812 |
| 0.282 | 1.0 | 51 | 0.3977 | 18.3109 |
| 0.1077 | 2.0 | 102 | 0.3658 | 17.3151 |
| 0.0561 | 3.0 | 153 | 0.3718 | 17.0385 |
| 0.0328 | 4.0 | 204 | 0.3881 | 17.3889 |
| 0.023 | 5.0 | 255 | 0.4000 | 17.7208 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Bearrr310/train_grpo_7B_unsloth_0616_stage2 | Bearrr310 | 2025-06-16T09:15:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"grpo",
"dataset:unsloth-7B-reward-0616-stage2",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T09:15:06Z | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
datasets: unsloth-7B-reward-0616-stage2
library_name: transformers
model_name: train_grpo_7B_unsloth_0616_stage2
tags:
- generated_from_trainer
- unsloth
- trl
- grpo
licence: license
---
# Model Card for train_grpo_7B_unsloth_0616_stage2
This model is a fine-tuned version of [unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit) on the [unsloth-7B-reward-0616-stage2](https://huggingface.co/datasets/unsloth-7B-reward-0616-stage2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Bearrr310/train_grpo_7B_unsloth_0616_stage2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
systemofapwne/piper-de-glados | systemofapwne | 2025-06-16T09:14:58Z | 0 | 3 | null | [
"onnx",
"portal",
"GLaDOS",
"turret",
"de",
"base_model:rhasspy/piper-voices",
"base_model:quantized:rhasspy/piper-voices",
"region:us"
] | null | 2025-03-01T17:59:16Z | ---
language:
- de
tags:
- portal
- GLaDOS
- turret
base_model: rhasspy/piper-voices
---
# Table of Contents
1. [GLaDOS voice model, trained on German Portal 1 and Portal 2 game files](#orgf1ffe5a)
1. [Model description](#orgb8e0867)
2. [Dataset & Training](#orgdebb47e)
1. [Build the training dataset](#org793fef7)
2. [Training](#org2be9bb4)
3. [Infer the final model](#org192c2b8)
<a id="orgf1ffe5a"></a>
# GLaDOS voice model, trained on German Portal 1 and Portal 2 game files
<a id="orgb8e0867"></a>
## Model description
This model uses a checkpoint from the [Thorsten High](https://huggingface.co/datasets/rhasspy/piper-checkpoints/tree/main/de/de_DE/thorsten/high) model as a base and fine-tunes it
on the voice lines coming directly from the game files of Portal 1 and Portal 2, in order to
replicate the German GLaDOS voice for piper.
Training has been performed on an RTX 4000 for more than 3000 epochs.
You will find two voice models in here:
- `de_DE-glados-high.onnx` and `de_DE-glados-high.onnx.json`: GLaDOS herself
- `de_DE-glados-turret-high.onnx` and `de_DE-glados-turret-high.onnx.json`: Fine-tuned from the above model to sound like the turret voice.
<a id="orgdebb47e"></a>
## Dataset & Training
I have also added *hints on how to build the training dataset* and the toolchain used for preparing and training the model to this repo.
The reasons being:
- The training data is intellectual property of Valve and copyrighted by them (I cannot include it here for obvious reasons)
- Training a model for piper (as of early 2025) relies on old/outdated tools from 2021, and getting everything up
and running can be super frustrating
**Requirements**
- A PC with an NVIDIA GPU and the proprietary NVIDIA drivers, CUDA, Docker (with Docker Compose), as well as the nvidia-container-toolkit installed
- Ideally use a Linux system (WSL is untested but might work)
- Basic Linux and Python knowledge
<a id="org793fef7"></a>
### Build the training dataset
1. Extract the files from the game
The training dataset has been extracted from the Portal 1 and Portal 2 game files.
For legal reasons, they are not included in this repo. But you can easily extract them from the
game files via [VPKEdit](https://developer.valvesoftware.com/wiki/VPKEdit).
**Portal 1**:
- Switch the game to the desired language (Here: German) via Steam
- Navigate to `<steam>/steamapps/common/Portal/portal` and open `portal_pak_dir.vpk` with VPKEdit
- Inside `portal_pak_dir.vpk`, navigate to `sound/vo/aperture_ai` and extract all `*.wav` files into the folder `raw` inside this git repo
**Portal 2:**
- Switch the game to the desired language (Here: German) via Steam
- Navigate to `<steam>/steamapps/common/portal 2`. Select the subfolder matching the language (here `portal2_german`) and open `pak01_dir.vpk` via VPKEdit
- Inside `pak01_dir.vpk`, navigate to `sound/vo/glados` and extract all `*.wav` files (but no subfolders) to the folder `raw` inside this git repo
**Portal 2 DLC 1**:
- Repeat the first two steps for **Portal 2** above, but now select the `portal2_dlc1_<your language>` folder (if it exists; here, `portal2_dlc1_german` does exist) and open `pak01_dir.vpk` with VPKEdit
- Repeat the third step for **Portal 2** above, but copy the files to `raw` in this git repo
2. Transcode the files
We need to transcode the files. The Portal 1 files are 44.1 kHz WAV, while the Portal 2 files are MP3.
For training, we need WAV, 16-bit (LE), mono PCM with one of the sample rates shown below, depending on the model quality we want to train.
- x-low, low: 16000 Hz
- medium, high: 22050 Hz
NOTE: In principle, we could also train on 44100 Hz, but piper-train would then need to be modified for **training** and **inference**, as it only supports the sample rates listed above.
Run the following command (requires `ffmpeg` to be installed)
# Before running the script, first edit the bitrate you want
./0_transcode.sh
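For reference, here is a minimal Python sketch of what such a transcode step might do (the `raw_transcoded` output folder is an assumption for illustration; `raw` and the target formats follow the notes above, and `ffmpeg` must be on the PATH):
```python
# Hypothetical equivalent of 0_transcode.sh: convert everything in raw/
# to 16-bit LE mono PCM WAV at the target sample rate using ffmpeg.
import pathlib
import subprocess

TARGET_RATE = 22050  # 16000 for x-low/low models, 22050 for medium/high

src = pathlib.Path("raw")
dst = pathlib.Path("raw_transcoded")
dst.mkdir(exist_ok=True)

for f in sorted(src.iterdir()):
    if f.suffix.lower() not in {".wav", ".mp3"}:
        continue
    out = dst / (f.stem + ".wav")
    # -ar: sample rate, -ac 1: mono, pcm_s16le: 16-bit little-endian PCM
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(f), "-ar", str(TARGET_RATE),
         "-ac", "1", "-c:a", "pcm_s16le", str(out)],
        check=True,
    )
```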
3. Sort by good/bad samples
<span class="underline">Now the annoying part</span>: Listen to all voice samples, one by one, and sort them into good (same voice style, no degradation in quality, no additional non-voice parts or mumbling, etc.) and bad (the opposite).
I have written a helper script for this purpose: **1_sort_good_bad.py** (read the comments in it).
<span class="underline">But hold your horses</span>: Before you take on this tedious job, which can take several hours, note that I expect the quality of the voice lines to be similar across languages. So you can use my
script `1_from_good.py`, which uses the `good.txt` file to tag voice samples as **good** or **bad**, based on the decisions I made while listening to GLaDOS myself.
Run the following command
./1_from_good.py
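A minimal sketch of what such a tagging script might do (assumptions: `good.txt` lists one good sample name per line, and good samples end up in `raw_good`; the `raw_transcoded` source folder is illustrative):
```python
# Hypothetical equivalent of 1_from_good.py: copy samples whose names
# appear in good.txt from the transcoded folder to raw_good/.
import pathlib
import shutil

good_names = set(pathlib.Path("good.txt").read_text().split())

src = pathlib.Path("raw_transcoded")
dst = pathlib.Path("raw_good")
dst.mkdir(exist_ok=True)

for f in src.glob("*.wav"):
    if f.stem in good_names:
        shutil.copy2(f, dst / f.name)
```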
4. Transcribe
Now we need to transcribe the files. For this, we need `faster-whisper`. The easiest way to install and use it is via Docker.
But before you do that, you should edit the file `2_transcribe.py` and select the language and model you want to use.
Run this to build the docker container(s)
docker compose up --build -d
docker exec -it transcribe bash
You should now be in the `transcribe` docker container. Run
./2_transcribe.py
This will yield a new file, `metadata.csv`. <span class="underline">Copy this file to `raw_good` once transcription has finished.</span>
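For orientation, a transcription script like `2_transcribe.py` might look roughly like this (a sketch, not the repo's actual script; it assumes `faster-whisper` is installed, the good samples live in `raw_good`, and piper's `file_id|transcription` metadata format):
```python
# Transcribe every WAV in raw_good/ with faster-whisper and write a
# piper-style metadata.csv (one "file_id|transcription" line per sample).
import pathlib
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

rows = []
for wav in sorted(pathlib.Path("raw_good").glob("*.wav")):
    segments, _info = model.transcribe(str(wav), language="de")
    text = " ".join(seg.text.strip() for seg in segments)
    rows.append(f"{wav.stem}|{text}")

pathlib.Path("metadata.csv").write_text("\n".join(rows) + "\n", encoding="utf-8")
```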
<a id="org2be9bb4"></a>
### Training
For this, you should use the Docker container, which is provided by this repo.
But before you do that, you need to configure two files:
- `3_gen_traindata.sh`: Edit the sample rate (16000 for x-low and low, 22050 for medium and high models) and the language code (en, de, ru, fr, …)
- `4_train.sh`: Edit the QUALITY, BATCHSIZE, and PHONEME_MAX parameters to suit your training hardware.
Also select the CHKPOINT to start from: you ideally do not want to train from scratch but rather from an already existing checkpoint.
Grab [one from the piper people](https://huggingface.co/datasets/rhasspy/piper-checkpoints/tree/main) that fits the model quality (x-low, low, medium, high) and language that you want to train.
Copy it to `checkpoints` within this repo.
Now run the following within this repo (if you haven't already done so for transcription)
docker compose up --build -d
docker exec -it training bash
This will build and enter the training container and also export training metrics via tensorboard at <http://127.0.0.1:6006>
From inside the container, you now need to generate the training data for the training process
./3_gen_traindata.sh
And now, you are ready for training. Simply run
./4_train.sh
inside the container.
In case you need to stop and later resume training, you just have to change the path to the checkpoint by setting the `CHKPOINT` variable in `./4_train.sh`.
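For orientation: per piper's [TRAINING.md](https://github.com/rhasspy/piper/blob/master/TRAINING.md), the two scripts roughly wrap commands like the following (flags and values are illustrative; the exact ones live in the scripts themselves)
python3 -m piper_train.preprocess --language de --input-dir raw_good --output-dir traindata --dataset-format ljspeech --single-speaker --sample-rate 22050
python3 -m piper_train --dataset-dir traindata --accelerator gpu --devices 1 --batch-size 32 --max_epochs 3000 --resume_from_checkpoint checkpoints/<base>.ckpt --checkpoint-epochs 1 --precision 32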
<a id="org192c2b8"></a>
### Infer the final model
After training has finished (either the loss flattened off or you hit the max epoch limit), you need to export the model to the ONNX format.
First, edit `5_export.sh` and set the name as well as the checkpoint (generally the last checkpoint written by `4_train.sh`) that you want to export the model from.
Still from inside the training Docker container, run this command
./5_export.sh
This will generate a `<model_name>.onnx` and a `<model_name>.onnx.json` file. The latter needs to be adjusted: open it in a text editor and navigate to the line where it reads
"dataset": "",
and replace `""` with this model's name (here: `de_DE-glados-high`), so that it reads
"dataset": "de_DE-glados-high"
These two files can now be used by piper.
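For example, synthesis with the piper CLI then looks like this (a sketch; the sample sentence is just a placeholder, and both files are assumed to be in the working directory)
echo 'Hallo und willkommen im Aperture Science Enrichment Center.' | piper --model de_DE-glados-high.onnx --output_file hello.wav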
|
nagoshidayo/MORTM | nagoshidayo | 2025-06-16T09:12:57Z | 0 | 0 | null | [
"music-generation",
"transformer",
"MoE",
"ALiBi",
"FlashAttention",
"melody-generation",
"rhythmic-modeling",
"MIDI or Chord to MIDI or Chord",
"region:us"
] | null | 2025-06-16T09:02:25Z | ---
pipeline_tag: MIDI or Chord to MIDI or Chord
tags:
- music-generation
- transformer
- MoE
- ALiBi
- FlashAttention
- melody-generation
- rhythmic-modeling
---
# Model Card for MORTM (Metric-Oriented Rhythmic Transformer for Melodic generation)
MORTM is a Transformer-based model designed for melody generation, with a strong emphasis on metric (rhythmic) structure. It represents music as sequences of pitch, duration, and relative beat positions within a measure (normalized to 96 ticks), making it suitable for time-robust, rhythm-aware music generation tasks.
## Model Details
### Model Description
MORTM (Metric-Oriented Rhythmic Transformer for Melodic generation) is a decoder-only Transformer architecture optimized for music generation with rhythmic awareness. It generates melodies measure-by-measure in an autoregressive fashion. The model supports chord-conditional generation and is equipped with the following features:
- Mixture of Experts (MoE) in the feedforward layers for capacity increase and compute efficiency.
- ALiBi (Attention with Linear Biases) for relative positional biasing (see the sketch after this list).
- FlashAttention2 for fast and memory-efficient attention.
- Relative tick-based tokenization (e.g., [Position, Duration, Pitch]) for metric robustness.
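As a rough illustration of the ALiBi ingredient mentioned above, here is a generic sketch of its distance-based attention bias (the standard formulation from the ALiBi paper, not MORTM's exact implementation):
```python
# Generic ALiBi sketch: head h adds bias -m_h * (i - j) to the attention
# logit between query position i and key position j (causal, j <= i).
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Geometric slopes 2^(-8/H), 2^(-16/H), ... (exact for power-of-two head counts).
    slopes = torch.tensor(
        [2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)]
    )
    pos = torch.arange(seq_len)
    distance = (pos[:, None] - pos[None, :]).clamp(min=0)  # (seq, seq)
    return -slopes[:, None, None] * distance  # (heads, seq, seq)

bias = alibi_bias(num_heads=8, seq_len=16)
# Add `bias` to the attention logits before the causal mask and softmax.
```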
- **Developed by:** Koue Okazaki & Takaki Nagoshi
- **Funded by [optional]:** Nihon University, Graduate School of Integrated Basic Sciences
- **Shared by [optional]:** ProjectMORTM
- **Model type:** Transformer (decoder-only with MoE and ALiBi)
- **Language(s) (NLP):** N/A (music domain)
- **License:** MIT
- **Finetuned from model [optional]:** Custom-built from scratch (not fine-tuned from a pretrained LM)
### Model Sources [optional]
- **Repository:** [https://github.com/Ayato964/MORTM](https://github.com/Ayato964/MORTM) *(replace with actual link)*
- **Paper [optional]:** In submission
- **Demo [optional]:** Coming soon
## Uses
### Direct Use
MORTM can generate melodies from scratch or conditionally based on chord progressions. It is ideal for:
- Melody composition in pop, jazz, and improvisational styles.
- Real-time melodic suggestion systems for human-AI co-creation.
- Music education and melody completion tools.
### Downstream Use [optional]
- Style transfer with different chord inputs.
- Harmonization and rhythm-based accompaniment systems.
### Out-of-Scope Use
- Audio-to-audio tasks (e.g., vocal separation).
- Raw audio synthesis (requires additional vocoder).
- Not suitable for genre classification or music recommendation.
## Bias, Risks, and Limitations
As the training dataset is primarily composed of Western tonal music, the model may underperform on:
- Non-tonal, microtonal, or traditional music styles.
- Polyrhythmic or tempo-variable music.
- Genres not sufficiently represented in training data (e.g., Indian classical).
### Recommendations
Generated melodies should be manually reviewed in professional music contexts. Users are encouraged to retrain or fine-tune on representative datasets when applying to culturally specific music.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("nagoshidayo/mortm")
tokenizer = AutoTokenizer.from_pretrained("nagoshidayo/mortm")
```
|