| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-01 18:27:11 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 461 classes |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-01 18:25:15 |
| card | string | lengths 11 to 1.01M |
kimddo1/bert-kor-kosa-nsmc | kimddo1 | 2025-05-30T03:02:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-30T03:01:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
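The tags on this entry (`transformers`, `bert`, `text-classification`) suggest a minimal sketch along the following lines; the label set and the Korean example input (inferred from the NSMC naming) are assumptions, not documented facts.
```python
from transformers import pipeline

# Hedged sketch: assumes a standard text-classification head; label names are not documented.
classifier = pipeline("text-classification", model="kimddo1/bert-kor-kosa-nsmc")
print(classifier("이 영화 정말 재미있어요!"))  # hypothetical Korean movie review, per the NSMC naming
```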
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BongRea/Qwen3_Rude_RAG_FULL_sec | BongRea | 2025-05-30T03:02:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T03:01:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
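No snippet is given; with only `transformers` and `safetensors` tags to go on, a generic loading sketch is the safest guess. The causal-LM head is an assumption inferred from the repository name, not from documented metadata.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: the architecture and task head are assumptions inferred from the repo name.
model_id = "BongRea/Qwen3_Rude_RAG_FULL_sec"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
```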
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yinchenghust/openpi_fast_libero_cot_rft | yinchenghust | 2025-05-30T03:00:01Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"model",
"pi0fast_base_cot",
"processor",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-28T10:19:18Z | ---
library_name: transformers
tags:
- model
- pi0fast_base_cot
- processor
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
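Pending an official snippet, the `paligemma` and `image-text-to-text` tags point to a vision-language loading pattern like the sketch below; the prompt wording and the example image path are hypothetical.
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image

# Hedged sketch: prompt format and checkpoint compatibility are assumptions.
model_id = "yinchenghust/openpi_fast_libero_cot_rft"
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto")

image = Image.open("example.png")  # hypothetical input image
inputs = processor(text="describe the scene", images=image, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(outputs[0], skip_special_tokens=True))
```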
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sebastianmr18/xlm-roberta-ner-qlora-bs32-epochs-3 | sebastianmr18 | 2025-05-30T02:53:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:adapter:FacebookAI/xlm-roberta-large",
"region:us"
] | null | 2025-05-30T02:53:37Z | ---
base_model: xlm-roberta-large
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
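The surrounding metadata (a PEFT adapter on `FacebookAI/xlm-roberta-large`, "ner" in the repository name) suggests attaching the adapter to a token-classification base, as in this sketch; the label count and whether the classifier head ships with the adapter are assumptions.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import PeftModel

# Hedged sketch: num_labels and the saved classification head are assumptions.
base = AutoModelForTokenClassification.from_pretrained("FacebookAI/xlm-roberta-large")
model = PeftModel.from_pretrained(base, "sebastianmr18/xlm-roberta-ner-qlora-bs32-epochs-3")
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
```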
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
BienKieu/codeT5-phase1-version7 | BienKieu | 2025-05-30T02:48:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:BienKieu/codeT5-phase1-version6",
"base_model:finetune:BienKieu/codeT5-phase1-version6",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-29T16:45:07Z | ---
library_name: transformers
license: apache-2.0
base_model: BienKieu/codeT5-phase1-version6
tags:
- generated_from_trainer
model-index:
- name: codeT5-phase1-version7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeT5-phase1-version7
This model is a fine-tuned version of [BienKieu/codeT5-phase1-version6](https://huggingface.co/BienKieu/codeT5-phase1-version6) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
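In the absence of documented usage, the `text2text-generation` pipeline tag implies a sketch like the following; the input format expected by this CodeT5 derivative is an assumption.
```python
from transformers import pipeline

# Hedged sketch: the task and prompt format are assumptions based on the pipeline tag.
generator = pipeline("text2text-generation", model="BienKieu/codeT5-phase1-version7")
print(generator("example input")[0]["generated_text"])  # hypothetical input
```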
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 14
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
PhillipW/Hobbit_Home | PhillipW | 2025-05-30T02:47:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T02:34:13Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Hobbit_Home
---
# Hobbit_Home
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Hobbit_Home` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "Hobbit_Home",
    "lora_weights": "https://huggingface.co/PhillipW/Hobbit_Home/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('PhillipW/Hobbit_Home', weight_name='lora.safetensors')
image = pipeline('Hobbit_Home').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/PhillipW/Hobbit_Home/discussions) to add images that show off what you’ve made with this LoRA.
|
OpenGVLab/ZeroGUI-AndroidLab-7B | OpenGVLab | 2025-05-30T02:46:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"gui",
"conversational",
"en",
"zh",
"arxiv:2505.23762",
"base_model:ByteDance-Seed/UI-TARS-7B-DPO",
"base_model:finetune:ByteDance-Seed/UI-TARS-7B-DPO",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-29T16:19:01Z | ---
license: apache-2.0
language:
- en
- zh
base_model:
- ByteDance-Seed/UI-TARS-7B-DPO
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- multimodal
- gui
---
# ZeroGUI-AndroidLab-7B
[\[📜 Paper\]](https://arxiv.org/abs/2505.23762)
[\[📂 GitHub\]](https://github.com/OpenGVLab/ZeroGUI)
## Introduction
We propose **ZeroGUI**, a fully automated online reinforcement learning framework that enables GUI agents to train and adapt in interactive environments at zero human cost.
* **Automatic Task Generation:** Automatically proposes diverse, executable GUI tasks.
* **Automatic Reward Estimation:** Assigns binary task rewards based on trajectory screenshots and employs a voting mechanism to avoid hallucinated success.
* **Two-Stage Online RL:** Combines training on generated tasks and test-time adaptation to continually improve the agent's performance.

## Results

## Citation
If you find this work helpful in your research, please consider citing:
```bibtex
@article{yang2025zerogui,
title={ZeroGUI: Automating Online GUI Learning at Zero Human Cost},
author={Yang, Chenyu and Shiqian, Su and Liu, Shi and Dong, Xuan and Yu, Yue and Su, Weijie and Wang, Xuehui and Liu, Zhaoyang and Zhu, Jinguo and Li, Hao and Wang, Wenhai and Qiao, Yu and Zhu, Xizhou and Dai, Jifeng},
journal={arXiv preprint arXiv:2505.23762},
year={2025}
}
``` |
4yyw/fdgdf | 4yyw | 2025-05-30T02:45:00Z | 0 | 0 | null | [
"dataset:nvidia/OpenCodeReasoning",
"doi:10.57967/hf/5671",
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T15:29:26Z | ---
license: apache-2.0
datasets:
- nvidia/OpenCodeReasoning
--- |
BootesVoid/cmb9z4w0p0keh1b1yxnku6g42_cmba6dmo30m7v1b1y9y4q87z9 | BootesVoid | 2025-05-30T02:44:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T02:44:05Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: trisha
---
# Cmb9Z4W0P0Keh1B1Yxnku6G42_Cmba6Dmo30M7V1B1Y9Y4Q87Z9
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `trisha` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "trisha",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb9z4w0p0keh1b1yxnku6g42_cmba6dmo30m7v1b1y9y4q87z9/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb9z4w0p0keh1b1yxnku6g42_cmba6dmo30m7v1b1y9y4q87z9', weight_name='lora.safetensors')
image = pipeline('trisha').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb9z4w0p0keh1b1yxnku6g42_cmba6dmo30m7v1b1y9y4q87z9/discussions) to add images that show off what you’ve made with this LoRA.
|
bratao/Qwen3OIE-8B | bratao | 2025-05-30T02:43:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:train_dataset_updated.jsonl",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T02:24:07Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-8B
tags:
- generated_from_trainer
datasets:
- train_dataset_updated.jsonl
model-index:
- name: outputs/out/
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: Qwen/Qwen3-8B
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
  - axolotl.integrations.liger.LigerPlugin

liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true

datasets:
  - path: "train_dataset_updated.jsonl"
    type: chat_template
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value

output_dir: ./outputs/out/

sequence_len: 2048
sample_packing: true
flex_attention: true
pad_to_sequence_len: true

flex_attn_compile_kwargs:
  dynamic: false
  mode: max-autotune-no-cudagraphs
wandb_project: openie-qwen3
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
bf16: true
tf32: true
resume_from_checkpoint:
logging_steps: 1
evals_per_epoch: 1
saves_per_epoch: 1
warmup_steps: 10
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# outputs/out/
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the train_dataset_updated.jsonl dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
mikankure/gensyn-checkpoints-whistling_howling_scorpion | mikankure | 2025-05-30T02:41:45Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am whistling howling scorpion",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-17T02:03:41Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: gensyn-checkpoints-whistling_howling_scorpion
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am whistling howling scorpion
- unsloth
- trl
licence: license
---
# Model Card for gensyn-checkpoints-whistling_howling_scorpion
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mikankure/gensyn-checkpoints-whistling_howling_scorpion", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
JosephTong/llava-v1.5-7b-flowcut128 | JosephTong | 2025-05-30T02:41:20Z | 0 | 1 | null | [
"safetensors",
"llava_llama",
"image-text-to-text",
"arxiv:2505.19536",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-05-29T03:12:40Z | ---
license: apache-2.0
base_model:
- lmsys/vicuna-7b-v1.5
pipeline_tag: image-text-to-text
---
# FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models
Jintao Tong<sup>1</sup>,
Wenwei Jin<sup>2</sup>,
Pengda Qin<sup>2</sup>,
Anqi Li<sup>3</sup>,
Yixiong Zou<sup>1✉</sup>
Yuhong Li<sup>2✉</sup>,
Yuhua Li<sup>1</sup>,
Ruixuan Li<sup>1</sup>
<br><br>
<sup>1</sup>School of Computer Science and Technology, Huazhong University of Science and Technology<br> <sup>2</sup>Xiaohongshu Inc., <sup>3</sup>Institute of Information Science, Beijing Jiaotong University
[](https://github.com/TungChintao/FlowCut)
[](https://arxiv.org/pdf/2505.19536)
[](https://github.com/TungChintao/FlowCut/blob/main/LICENSE)
## 💡 Highlights
> **TLDR:** To address the inefficiency caused by excessive visual tokens in LVLMs, we propose a unified, bottom-up perspective based on information flow that reveals how redundancy emerges dynamically, and we introduce FlowCut, which aligns pruning decisions with the model's inherent behavior and outperforms all existing approaches.
## 🛠 Preparation
Our code is easy to use.
1. Clone the [LLaVA](https://github.com/haotian-liu/LLaVA)'s repository.
```
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
```
2. Install the [LLaVA](https://github.com/haotian-liu/LLaVA)'s environment.
```
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip
pip install -e .
pip install flash-attn --no-build-isolation
```
3. For formal usage, you can install the package from PyPI by running the following command:
```
pip install flowcut
```
For development, you can install the package by cloning the repository and running the following command:
```
git clone https://github.com/TungChintao/FlowCut
cd flowcut
pip install -e .
```
The file organization is as follows:
```
├── LLaVA-main
├── flowcut
├── llava
├── playground
├── script
```
## 🚀 Quick Start
```Python
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model
from flowcut import flowcut
model_path = "liuhaotian/llava-v1.5-7b"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path)
)
## FlowCut retains 64 visual tokens
model = flowcut(model, target_num=64)
```
## 📖 Evaluation
The evaluation code follows the structure of [LLaVA](https://github.com/haotian-liu/LLaVA) or [Lmms-Eval](https://github.com/EvolvingLMMs-Lab/lmms-eval). After loading the model, simply add two lines as shown below:
```python
## Load LLaVA Model (code from llava.eval.model_vqa_loader)
tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
## add FlowCut
from flowcut import flowcut
model = flowcut(model, target_num=64)
```
Script templates (please follow the detailed instructions in [LLaVA-Evaluation](https://github.com/haotian-liu/LLaVA/blob/main/docs/Evaluation.md)).
```Shell
bash scripts/v1_5/eval/[Benchmark].sh
```
Examples:
```Shell
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/mme.sh
```
```Shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 bash scripts/v1_5/eval/vqav2.sh
```
## 🎯 Training
The training code follows the structure of [LLaVA](https://github.com/haotian-liu/LLaVA). After loading the model, simply add two lines as shown below:
```python
## Load LLaVA Model (code from llava.train)
code of loading model...
## add FlowCut
from flowcut import flowcut
model = flowcut(model, target_num=64)
## training
trainer = LLaVATrainer(model=model,
                       tokenizer=tokenizer,
                       args=training_args,
                       **data_module)
```
## 🔑 License
- This project is released under the [Apache 2.0 license](https://github.com/TungChintao/FlowCut/blob/main/LICENSE).
## 📌 Citation
- If you find this project useful in your research, please consider citing:
```bibtex
@article{tong2025flowcut,
title={FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models},
author={Tong, Jintao and Jin, Wenwei and Qin, Pengda and Li, Anqi and Zou, Yixiong and Li, Yuhong and Li, Yuhua and Li, Ruixuan},
journal={arXiv preprint arXiv:2505.19536},
year={2025}
}
```
|
liuyuntoks/Medical-DeepSeek-R1-Distill-Qwen-1.5B | liuyuntoks | 2025-05-30T02:39:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T02:38:27Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** liuyuntoks
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
feilongfl/test | feilongfl | 2025-05-30T02:34:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T02:28:36Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
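Mirroring the quick-start pattern used by other cards in this dataset, a hedged sketch for this `qwen3` text-generation checkpoint might be (chat formatting and generation settings are assumptions):
```python
from transformers import pipeline

# Hedged sketch: the chat template and generation settings are assumptions.
generator = pipeline("text-generation", model="feilongfl/test", device="cuda")
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```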
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tesuser8785/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scampering_savage_wallaby | tesuser8785 | 2025-05-30T02:34:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scampering savage wallaby",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-27T19:22:12Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scampering_savage_wallaby
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scampering savage wallaby
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scampering_savage_wallaby
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tesuser8785/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scampering_savage_wallaby", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
yresearch/swd_flux | yresearch | 2025-05-30T02:34:16Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T02:33:12Z | ---
license: apache-2.0
---
|
Gusanidas/branch-grpo-model-qwen-3b-branch | Gusanidas | 2025-05-30T02:33:31Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T10:43:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
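A hedged starting point, following the conversational `text-generation` tags on this entry (the chat template and settings are assumptions):
```python
from transformers import pipeline

# Hedged sketch: the chat template and generation settings are assumptions.
generator = pipeline("text-generation", model="Gusanidas/branch-grpo-model-qwen-3b-branch", device="cuda")
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```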
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
linborui/EAGLE-Llama-3.2-3B-Instruct | linborui | 2025-05-30T02:31:38Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T02:16:04Z | ---
license: apache-2.0
---
|
ElMusk/dp70 | ElMusk | 2025-05-30T02:31:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-30T02:14:26Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
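As a rough sketch, the recommended profile maps onto the `transformers` generation API like this (argument names follow the standard `generate()` interface; `min_p` needs a recent transformers release, and the prompt is a placeholder):
```Python
# Hedged sketch: pass the recommended sampling profile through generate_kwargs.
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="tesslate/synthia-s1-27b",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

messages = [{"role": "user", "content": [{"type": "text", "text": "Plan a three-day trip to Kyoto."}]}]
output = pipe(
    text=messages,
    generate_kwargs=dict(
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.7,        # recommended profile from above
        top_k=40,
        top_p=0.95,
        min_p=0.05,
        repetition_penalty=1.1,
    ),
)
print(output[0]["generated_text"][-1]["content"])
```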
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b improves on its base model by roughly 10-20% on most benchmarks, with notably larger gains on some.
I scaled down each benchmark listed so I could complete it, then averaged the numbers, but I can't verifiably claim to have run each full benchmark. (I ran out of budget, and I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
ElMusk/dp71 | ElMusk | 2025-05-30T02:31:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-30T02:14:42Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
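As a rough sketch, the recommended profile maps onto the `transformers` generation API like this (argument names follow the standard `generate()` interface; `min_p` needs a recent transformers release, and the prompt is a placeholder):
```Python
# Hedged sketch: pass the recommended sampling profile through generate_kwargs.
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="tesslate/synthia-s1-27b",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

messages = [{"role": "user", "content": [{"type": "text", "text": "Plan a three-day trip to Kyoto."}]}]
output = pipe(
    text=messages,
    generate_kwargs=dict(
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.7,        # recommended profile from above
        top_k=40,
        top_p=0.95,
        min_p=0.05,
        repetition_penalty=1.1,
    ),
)
print(output[0]["generated_text"][-1]["content"])
```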
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b improves on its base model by roughly 10-20% on most benchmarks, with notably larger gains on some.
I scaled down each benchmark listed so I could complete it, then averaged the numbers, but I can't verifiably claim to have run each full benchmark. (I ran out of budget, and I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
hyperonsol/kori-memes | hyperonsol | 2025-05-30T02:24:17Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T02:24:00Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KORI
---
# Kori Memes
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KORI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "KORI",
"lora_weights": "https://huggingface.co/hyperonsol/kori-memes/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('hyperonsol/kori-memes', weight_name='lora.safetensors')
image = pipeline('KORI').images[0]
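image.save("kori-output.png")  # illustrative filename; writes the generated image to disk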
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/hyperonsol/kori-memes/discussions) to add images that show off what you’ve made with this LoRA.
|
hdong0/Qwen2.5-Math-1.5B-Open-R1-GRPO_MATH_1000steps_lr1e-6_kl1e-3_acc | hdong0 | 2025-05-30T02:09:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T18:28:42Z | ---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen2.5-Math-1.5B-Open-R1-GRPO_MATH_1000steps_lr1e-6_kl1e-3_acc
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-Math-1.5B-Open-R1-GRPO_MATH_1000steps_lr1e-6_kl1e-3_acc
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/Qwen2.5-Math-1.5B-Open-R1-GRPO_MATH_1000steps_lr1e-6_kl1e-3_acc", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
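As an illustrative sketch (not the exact training recipe), a minimal TRL GRPO run with the hyperparameters implied by the model name (learning rate 1e-6, KL coefficient 1e-3) and a placeholder accuracy-style reward might look like this:
```python
# Hedged sketch of GRPO fine-tuning with TRL; the reward function and the
# "problem" -> "prompt" column mapping are assumptions for illustration.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.rename_column("problem", "prompt")  # GRPOTrainer expects a "prompt" column

def boxed_answer_reward(completions, **kwargs):
    # Toy accuracy proxy: reward completions that emit a \boxed{...} final answer.
    return [1.0 if "\\boxed{" in completion else 0.0 for completion in completions]

args = GRPOConfig(
    output_dir="Qwen2.5-Math-1.5B-GRPO",
    learning_rate=1e-6,  # from the model name
    beta=1e-3,           # KL coefficient, also from the model name
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-1.5B",
    reward_funcs=boxed_answer_reward,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```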
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
llm-jp/llm-jp-3.1-1.8b-instruct4 | llm-jp | 2025-05-30T02:07:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2025-05-27T02:38:30Z | ---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3.1-1.8b-instruct4
LLM-jp-3.1 is a series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
Building upon the LLM-jp-3 series, the LLM-jp-3.1 models incorporate mid-training ([instruction pre-training](https://aclanthology.org/2024.emnlp-main.148/)), which significantly enhances their instruction-following capabilities compared to the original LLM-jp-3 models.
This repository provides the **llm-jp-3.1-1.8b-instruct4** model.
For an overview of the LLM-jp-3.1 models across different parameter sizes, please refer to:
- [LLM-jp-3.1 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-31-pre-trained-models-68368787c32e462c40a45f7b)
- [LLM-jp-3.1 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-31-fine-tuned-models-68368681b9b35de1c4ac8de4).
For more details on the training procedures and evaluation results, please refer to [this blog post](https://llm-jp.nii.ac.jp/ja/blog/blog-887/) (in Japanese).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3.1-1.8b-instruct4")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3.1-1.8b-instruct4", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
{"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
{"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.05,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Architectures:**
Dense model:
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
MoE model:
|Params|Layers|Hidden size|Heads|Routed Experts|Activated Experts|Context length|Embedding parameters|Non-embedding parameters|Activated parameters|Total parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|8x13b|40|5120|40|8|2|4096|1,018,746,880|72,144,081,920|22,200,806,400|73,162,828,800|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Mid-training
In the LLM-jp-3.1 series, we performed continuous pre-training based on [Instruction Pre-Training](https://aclanthology.org/2024.emnlp-main.148/).
Instruction Pre-Training enhances a model’s ability to follow instructions by continuing pre-training on a large collection of instruction–response pairs.
We prepared approximately 90B tokens of instruction–response data and mixed it with our pre-training datasets, conducting continuous pre-training on a total of 400B tokens.
Each model was initialized from existing checkpoints ([llm-jp/llm-jp-3-1.8b](https://huggingface.co/llm-jp/llm-jp-3-1.8b), [llm-jp/llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b), and [llm-jp/llm-jp-3-8x13b](https://huggingface.co/llm-jp/llm-jp-3-8x13b)) and underwent continuous instruction pre-training.
Since the LLM-jp-3 series was originally pre-trained on 2.1T tokens, the total pre-training token count amounts to 2.5T tokens.
Details of this training process will be released in a forthcoming paper. The instruction–response dataset used for this training will also be made publicly available.
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning and further aligned it with Direct Preference Optimization.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
| |[jaster v1.4.1](https://github.com/llm-jp/llm-jp-eval/tree/v1.4.1)| - |
| |[extraction-wiki-ja](https://huggingface.co/datasets/llm-jp/extraction-wiki-ja)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
#### Direct Preference Optimization
For Direct Preference Optimization (DPO), we adopted rejection sampling.
Prompts were sampled from the dataset used in SFT, and multiple responses were generated for each prompt.
These responses were then scored (by [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)), and DPO was performed by treating high-scoring responses as positive examples and low-scoring responses as negative examples.
We conducted DPO in two stages.
In the second stage, we additionally used [ac-self-inst](https://huggingface.co/datasets/llm-jp/ac-self-inst), a Japanese preference dataset focused on safety.
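In pseudocode, the pair-construction step can be sketched as follows (the helper names and number of samples are illustrative assumptions, not the exact pipeline):
```python
# Hedged sketch of rejection-sampling pair construction for DPO.
# `generate` and `judge` stand in for the SFT model and the scoring model.
def build_dpo_pairs(prompts, generate, judge, n_samples=8):
    pairs = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(n_samples)]
        scores = [judge(prompt, c) for c in candidates]
        best = max(range(n_samples), key=lambda i: scores[i])
        worst = min(range(n_samples), key=lambda i: scores[i])
        if scores[best] > scores[worst]:  # skip prompts where all responses tie
            pairs.append(
                {"prompt": prompt, "chosen": candidates[best], "rejected": candidates[worst]}
            )
    return pairs
```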
## Evaluation
### MT Bench (Japanese and English)
We evaluated the models using `gpt-4o-2024-08-06`.
The scores represent the average values obtained from three rounds of inference and evaluation.
For more details, please refer to the [codes](https://github.com/llm-jp/llm-jp-judge/tree/v1.0.0).
| Model Name | JA | EN |
|:------------------------------------------------------------------------------------------------------------------------------|----------:|-------:|
| gpt-35-turbo-1106 | 6.48 | 7.56 |
| gpt-4-0613 | 7.29 | 7.72 |
| gpt-4o-2024-08-06 | 8.10 | 8.38 |
| [sbintuitions/sarashina2.2-1b-instruct-v0.1](https://huggingface.co/sbintuitions/sarashina2.2-1b-instruct-v0.1) | 5.30 | 5.66 |
| [sbintuitions/sarashina2.2-3b-instruct-v0.1](https://huggingface.co/sbintuitions/sarashina2.2-3b-instruct-v0.1) | 7.07 | 6.96 |
| [Rakuten/RakutenAI-2.0-8x7B-instruct](https://huggingface.co/Rakuten/RakutenAI-2.0-8x7B-instruct) | 6.68 | 6.33 |
| [cyberagent/calm3-22b-chat](https://huggingface.co/cyberagent/calm3-22b-chat) | 6.86 | 6.77 |
| [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) | 7.07 | 7.99 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 7.64 | 8.27 |
| [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | 5.46 | 6.95 |
| [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) | 8.00 | 8.30 |
| [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) | 8.36 | 8.33 |
| [tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4) | 7.64 | 8.02 |
| [stockmark/Stockmark-2-100B-Instruct-beta](https://huggingface.co/stockmark/Stockmark-2-100B-Instruct-beta) | 7.42 | 7.17 |
| [llm-jp-3-1.8b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct3) | 4.64 | 4.09 |
| [llm-jp-3-13b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct3) | 6.21 | 6.13 |
| [llm-jp-3-8x13b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-8x13b-instruct3) | 6.60 | 6.49 |
| [llm-jp-3.1-1.8b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-1.8b-instruct4) | 6.30 | 5.70 |
| [llm-jp-3.1-13b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-13b-instruct4) | 7.37 | 7.01 |
| [llm-jp-3.1-8x13b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-8x13b-instruct4) | 7.50 | 7.05 |
### AnswerCarefully-Eval
[AnswerCarefully-Eval](https://www.anlp.jp/proceedings/annual_meeting/2025/pdf_dir/Q4-19.pdf) assesses the safety of Japanese language model outputs using the LLM-as-a-Judge approach, based on the test set from [llm-jp/AnswerCarefully](https://huggingface.co/datasets/llm-jp/AnswerCarefully).
We evaluated the models using `gpt-4o-2024-08-06`.
The scores represent the average values obtained from three rounds of inference and evaluation.
For more details, please refer to the [codes](https://github.com/llm-jp/llm-jp-judge/tree/v1.0.0).
| Model name | Score | Acceptance rate (%, ↑) | Violation rate (%, ↓) |
| :--- | ---: | ---: | ---: |
| gpt-35-turbo-1106 | 3.98 | 71.7 | 12.6 |
| gpt-4-0613 | 4.06 | 72.3 | 13.2 |
| gpt-4o-2024-08-06 | 4.09 | 72.7 | 12.5 |
| [llm-jp-3-1.8b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct3) | 4.03 | 75.9 | 12.2 |
| [llm-jp-3-13b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct3) | 4.37 | 88.4 | 6.5 |
| [llm-jp-3-8x13b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-8x13b-instruct3) | 4.48 | 91.6 | 4.3 |
| [llm-jp-3.1-1.8b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-1.8b-instruct4) | 3.66 | 64.7 | 24.3 |
| [llm-jp-3.1-13b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-13b-instruct4) | 4.17 | 82.4 | 12.2 |
| [llm-jp-3.1-8x13b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-8x13b-instruct4) | 4.26 | 83.1 | 11.6 |
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama. |
Johnnyman1100/EZ-Tokenizer_The_Tokenizer | Johnnyman1100 | 2025-05-30T02:06:45Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-05-29T23:32:20Z | ---
license: mit
---
# EZ-Tokenizer: 3.47 Chars/Token with 100% Reconstruction
> **"Go ahead, try to break it. I dare you."** - A tokenizer so efficient, it feels like cheating.
## 🚀 Performance Highlights
- **3.47** characters per token (beats industry standards)
- **100%** perfect reconstruction on all test cases
- **50K vocab size** (smaller, smarter, faster)
- **264K tokens/second** processing speed
## 💥 Benchmark This!
```python
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_pretrained("Johnnyman1100/EZ-Tokenizer_The_Tokenizer")
# Test it yourself
text = "Your text here"
encoded = tokenizer.encode(text)
decoded = tokenizer.decode(encoded.ids)
assert text == decoded # Try to make this fail, I'll wait...
print(f"Compression: {len(text)/len(encoded.ids):.2f} chars/token")
```
## 🏆 Challenge
Find any text where this tokenizer:
1. Fails to reconstruct perfectly, or
2. Gets worse compression than DeepSeek/others
First to report a verified case gets a shoutout!
## 📊 Technical Details
- **Vocabulary**: 50,000 tokens
- **Tested on**: 1.7M+ characters of mixed content
- **Perfect reconstruction** on all test cases
- **Faster** than DeepSeek by 1.23x
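To check the throughput claim yourself, a rough timing harness along these lines works (the corpus path is a placeholder; results vary with hardware):
```python
# Hedged timing harness; measures encode throughput and compression on your own corpus.
import time
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("Johnnyman1100/EZ-Tokenizer_The_Tokenizer")

corpus = open("sample.txt", encoding="utf-8").read()  # placeholder corpus path
start = time.perf_counter()
encoded = tokenizer.encode(corpus)
elapsed = time.perf_counter() - start

print(f"Throughput: {len(encoded.ids) / elapsed:,.0f} tokens/second")
print(f"Compression: {len(corpus) / len(encoded.ids):.2f} chars/token")
```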
## 🤔 Why This Matters
Because in a world of bloated models, efficiency still wins. This tokenizer proves you don't need a 100K+ token vocabulary to achieve perfect reconstruction and better compression.
## ⚖️ License
MIT
---
*"I didn't believe it either until I saw the benchmarks." - You, probably*
|
wandererupak/wave2vec-BERT-nepali-asr | wandererupak | 2025-05-30T02:06:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-29T08:27:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alana-flores-18/original.exlusive.twitter.foto.filtrada.de.alana.video.alana.flores.telegram.viral.x | alana-flores-18 | 2025-05-30T02:05:47Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T02:04:55Z | original exlusive twitter foto filtrada de alana video alana flores telegram viral x
<a rel="nofollow" href="http://viralflix.xyz/leaked?pa">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️</a>
<a rel="nofollow" href="http://viralflix.xyz/leaked?pa">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a rel="nofollow" href="http://viralflix.xyz/leaked?pa"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
Triangle104/Qwen3-30B-A3B-abliterated-Q8_0-GGUF | Triangle104 | 2025-05-30T02:02:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Qwen3-30B-A3B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-30T01:59:46Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Qwen3-30B-A3B-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
“**Risk of Sensitive or Controversial Outputs**“: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
“**Not Suitable for All Audiences**:“ Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
“**Legal and Ethical Responsibilities**“: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
“**Research and Experimental Use**“: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
“**Monitoring and Review Recommendations**“: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
“**No Default Safety Guarantees**“: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Qwen3-30B-A3B-abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen3-30B-A3B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q8_0-GGUF --hf-file qwen3-30b-a3b-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q8_0-GGUF --hf-file qwen3-30b-a3b-abliterated-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q8_0-GGUF --hf-file qwen3-30b-a3b-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q8_0-GGUF --hf-file qwen3-30b-a3b-abliterated-q8_0.gguf -c 2048
```
|
AXERA-TECH/LivePortrait | AXERA-TECH | 2025-05-30T01:57:05Z | 0 | 0 | null | [
"onnx",
"image-to-video",
"en",
"base_model:KwaiVGI/LivePortrait",
"base_model:quantized:KwaiVGI/LivePortrait",
"license:mit",
"region:us"
] | image-to-video | 2025-05-29T07:20:32Z | ---
license: mit
language:
- en
base_model:
- KwaiVGI/LivePortrait
pipeline_tag: image-to-video
---
<p align="center">
<img src="./assets/showcase2.gif" alt="showcase">
</p>
# LivePortrait
This version of LivePortrait has been converted to run on the Axera NPU using **w8a16** quantization.
This model has been optimized with the following:
- Compatible with Pulsar2 version: 3.4
## Convert tools links:
For those who are interested in model conversion:
- [the original repo](https://huggingface.co/KwaiVGI/LivePortrait)
- [Github for LivePortrait](https://github.com/AXERA-TECH/LivePortrait.axera)
## Support Platform
- AX650/AX8850
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
## How to use
Download all files from this repository to the device.
```
(py310) axera@dell:~/samples/LivePortrait$ tree -L 2
.
├── assets
│ └── examples
├── config.json
├── python
│ ├── axmodels
│ ├── cropper.py
│ ├── infer_onnx.py
│ ├── infer.py
│ ├── pretrained_weights
│ ├── requirements.txt
│ └── utils
└── README.md
7 directories, 6 files
```
### python env requirement
#### pyaxengine
https://github.com/AXERA-TECH/pyaxengine
```
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc1/axengine-0.1.3-py3-none-any.whl
pip install axengine-0.1.3-py3-none-any.whl
```
#### others
```
pip install -r python/requirements.txt
```
## Inference with AX650 or AX8850 Host, such as AX650 DEMO BOARD, M4N-DOCK(爱芯派Pro)
```
root@ax650 ~/yongqiang/LivePortrait.axera # python3 ./python/infer.py --source ./assets/examples/source/s0.jpg --driving ./assets/examples/driving/d8.jpg --models ./python/axmodels/ --output-dir ./axmodel_infer
[INFO] Available providers: ['AxEngineExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.12.0s
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 3.3 144960ad
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 3.3 144960ad
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 3.3 0f7260e8
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 3.3 144960ad
FaceAnalysisDIY warmup time: 0.598s
LandmarkRunner warmup time: 0.769s
2025-05-30 09:56:12.247 | INFO | __main__:main:727 - Start making driving motion template...
2025-05-30 09:56:14.770 | INFO | __main__:main:747 - Prepared pasteback mask done.
2025-05-30 09:56:17.219 | INFO | __main__:main:787 - The output of image-driven portrait animation is an image.
2025-05-30 09:56:30.701 | DEBUG | __main__:warp_decode:647 - warp time: 13.475s
2025-05-30 09:56:31.118 | INFO | __main__:main:881 - Animated image: ./axmodel_infer/s0--d8.jpg
2025-05-30 09:56:31.118 | INFO | __main__:main:882 - Animated image with concat: ./axmodel_infer/s0--d8_concat.jpg
2025-05-30 09:56:31.167 | DEBUG | __main__:<module>:894 - LivePortrait axmodel infer time: 32.455s
```
## Inference with M.2 Accelerator card
[What is M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html). This demo is shown running on an x86 host.
### Image
```
(py310) axera@dell:~/samples/LivePortrait$ python ./python/infer.py --source ./assets/examples/source/s0.jpg --driving ./assets/examples/driving/d8.jpg --models ./python/axmodels/ --output-dir ./axmodel_infer
[INFO] Available providers: ['AXCLRTExecutionProvider']
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.3 144960ad
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.3 144960ad
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.3 0f7260e8
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.3 144960ad
FaceAnalysisDIY warmup time: 0.024s
[20:02:20] LandmarkRunner warmup time: 0.031s human_landmark_runner.py:95
2025-05-29 20:02:20.727 | INFO | __main__:main:727 - Start making driving motion template...
2025-05-29 20:02:20.972 | INFO | __main__:main:747 - Prepared pasteback mask done.
2025-05-29 20:02:21.449 | INFO | __main__:main:787 - The output of image-driven portrait animation is an image.
2025-05-29 20:02:25.475 | DEBUG | __main__:warp_decode:647 - warp time: 4.017s
2025-05-29 20:02:25.892 | INFO | __main__:main:881 - Animated image: ./axmodel_infer/s0--d8.jpg
2025-05-29 20:02:25.892 | INFO | __main__:main:882 - Animated image with concat: ./axmodel_infer/s0--d8_concat.jpg
2025-05-29 20:02:25.904 | DEBUG | __main__:<module>:894 - LivePortrait axmodel infer time: 8.165s
(py310) axera@dell:~/samples/LivePortrait$
```
Here, `--models` specifies the storage path for the `*.axmodel` models.
The output of axmodel-infer is as follows:


### Video
```
python3 ./python/infer.py --source ./assets/examples/source/s0.jpg --driving ./assets/examples/driving/d0.mp4 --models ./python/axmodels/ --output-dir ./axmodel_infer
```
The output of `axmodel-infer` is as follows:


|
XiaomiMiMo/MiMo-VL-7B-RL | XiaomiMiMo | 2025-05-30T01:54:55Z | 0 | 15 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"base_model:XiaomiMiMo/MiMo-VL-7B-RL",
"base_model:finetune:XiaomiMiMo/MiMo-VL-7B-RL",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-30T00:37:21Z | ---
license: mit
library_name: transformers
base_model:
- XiaomiMiMo/MiMo-VL-7B-RL
---
<div align="center">
<picture>
<source srcset="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/Xiaomi_MiMo_darkmode.png?raw=true" media="(prefers-color-scheme: dark)">
<img src="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/Xiaomi_MiMo.png?raw=true" width="60%" alt="Xiaomi-MiMo" />
</picture>
</div>
<h3 align="center">
<b>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<br/>
MiMo-VL Technical Report
<br/>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<br/>
</b>
</h3>
<br/>
<div align="center" style="line-height: 1;">
|
<a href="https://huggingface.co/collections/XiaomiMiMo/mimo-vl-68382ccacc7c2875500cd212" target="_blank">🤗 HuggingFace</a>
|
<a href="https://www.modelscope.cn/collections/MiMo-VL-bb651017e02742" target="_blank">🤖️ ModelScope</a>
|
<a href="https://github.com/XiaomiMiMo/MiMo-VL/blob/main/MiMo-VL-Technical-Report.pdf" target="_blank">📔 Technical Report</a>
|
<br/>
</div>
<br/>
## I. Introduction
In this report, we share our efforts to build a compact yet powerful VLM, MiMo-VL-7B. MiMo-VL-7B comprises (1) a native resolution ViT encoder that preserves fine-grained visual details, (2) an MLP projector for efficient cross-modal alignment, and (3) our [MiMo-7B language model](https://github.com/XiaomiMiMo/MiMo), specifically optimized for complex reasoning tasks.
The development of MiMo-VL-7B involves two sequential training processes: (1) A four-stage pre-training phase, which includes projector warmup, vision-language alignment, general multi-modal pre-training, and long-context Supervised Fine-Tuning (SFT). This phase yields the MiMo-VL-7B-SFT model. (2) A subsequent post-training phase, where we introduce Mixed On-policy Reinforcement Learning (MORL), a novel framework that seamlessly integrates diverse reward signals spanning perception accuracy, visual grounding precision, logical reasoning capabilities, and human/AI preferences. This phase yields the MiMo-VL-7B-RL model.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/benchmarks.png?raw=true">
</p>
We open-source the MiMo-VL-7B series, including checkpoints of the SFT and RL models.
We believe this report along with the models will provide valuable insights to develop powerful reasoning VLMs that benefit the larger community.
### 🛤️ During this journey, we find
- **Incorporating high-quality, broad-coverage reasoning data from the pre-training stage is crucial for enhancing model performance**
- We curate high-quality reasoning data by identifying diverse queries, employing large reasoning models to regenerate responses with long CoT, and applying rejection sampling to ensure quality.
- Rather than treating this as supplementary fine-tuning data, we incorporate substantial volumes of this synthetic reasoning data directly into the later pre-training stages, where extended training yields continued performance improvements without saturation.
- **Mixed On-policy Reinforcement Learning further enhances model performance, while achieving stable simultaneous improvements remains challenging**
  - We apply RL across diverse capabilities, including reasoning, perception, grounding, and human preference alignment, spanning modalities including text, images, and videos. While this hybrid training approach further unlocks the model's potential, interference across data domains remains a challenge.
## II. Model Details
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/architecture.png?raw=true">
</p>
> Models are available at [Huggingface Collections: MiMo-VL](https://huggingface.co/collections/XiaomiMiMo/mimo-vl-68382ccacc7c2875500cd212) and [ModelScope Collections: MiMo-VL](https://www.modelscope.cn/collections/MiMo-VL-bb651017e02742)
| **Model** | **Description** | **Download (HuggingFace)** | **Download (ModelScope)** |
| :------------: | :-------------------------------------------------------------------: | :-----------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------: |
| MiMo-VL-7B-SFT | VLM with extraordinary reasoning potential after 4-stage pre-training | [🤗 XiaomiMiMo/MiMo-VL-7B-SFT](https://huggingface.co/XiaomiMiMo/MiMo-VL-7B-SFT) | [🤖️ XiaomiMiMo/MiMo-VL-7B-SFT](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-VL-7B-SFT) |
| MiMo-VL-7B-RL | RL model leapfrogging existing open-source models | [🤗 XiaomiMiMo/MiMo-VL-7B-RL](https://huggingface.co/XiaomiMiMo/MiMo-VL-7B-RL) | [🤖️ XiaomiMiMo/MiMo-VL-7B-RL](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-VL-7B-RL) |
## III. Evaluation Results
### General Capabilities
In general visual-language understanding, MiMo-VL-7B models achieve state-of-the-art open-source results.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/benchmarks_general.png?raw=true">
</p>
### Reasoning Tasks
In multi-modal reasoning, both the SFT and RL models significantly outperform all compared open-source baselines across these benchmarks.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/benchmarks_reasoning.png?raw=true">
</p>
> [!IMPORTANT]
> Results marked with \* are obtained using our evaluation framework.
> Tasks with ${\dagger}$ are evaluated by GPT-4o.
### GUI Tasks
MiMo-VL-7B-RL possesses exceptional GUI understanding and grounding capabilities. As a general-purpose VL model, MiMo-VL achieves comparable or even superior performance to GUI-specialized models.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/benchmarks_gui.png?raw=true">
</p>
### Elo Rating
With our in-house evaluation dataset and GPT-4o judgments, MiMo-VL-7B-RL achieves the highest Elo rating among all evaluated open-source vision-language models, ranking first across models spanning from 7B to 72B parameters.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/benchmarks_elo.png?raw=true">
</p>
## IV. Deployment
The MiMo-VL-7B series maintains full compatibility with the `Qwen2_5_VLForConditionalGeneration` architecture for deployment and inference.
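For reference, here is a minimal inference sketch following the standard Qwen2.5-VL usage pattern (the image URL is a placeholder, and `qwen_vl_utils` is the helper package used in Qwen2.5-VL examples):
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "XiaomiMiMo/MiMo-VL-7B-RL"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/demo.jpg"},  # placeholder image
        {"type": "text", "text": "Describe this image."},
    ],
}]

# Build the chat prompt and collect the vision inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```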
## V. Citation
```bibtex
@misc{coreteam2025mimovl,
title={MiMo-VL Technical Report},
author={{Xiaomi LLM-Core Team}},
year={2025},
url={https://github.com/XiaomiMiMo/MiMo-VL},
}
```
## VI. Contact
Please contact us at [[email protected]](mailto:[email protected]) or open an issue if you have any questions.
|
sophie-rain-18/original.video.18.sophie.rain.viral.video.sophie.rain.spiderman.leaked.video.on.social.media | sophie-rain-18 | 2025-05-30T01:52:16Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T01:51:36Z | original.video.18.sophie.rain.viral.video.sophie.rain.spiderman.leaked.video.on.social.media
<a rel="nofollow" href="http://viralflix.xyz/leaked?pa">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️</a>
<a rel="nofollow" href="http://viralflix.xyz/leaked?pa">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a rel="nofollow" href="http://viralflix.xyz/leaked?pa"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
original.video.18.sophie.rain.viral.video.sophie.rain.spiderman.leaked.video.on.social.media
original.video.18.sophie.rain.viral.video.sophie.rain.spiderman.leaked.video.on.social.media
original.video.18.sophie.rain.viral.video.sophie.rain.spiderman.leaked.video.on.social.media |
keanteng/bert-classification-wqd7005 | keanteng | 2025-05-30T01:51:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-classification",
"bert",
"healthcare",
"risk-assessment",
"questionnaire-analysis",
"en",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:agpl-3.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-29T10:51:59Z | ---
license: agpl-3.0
language:
- en
tags:
- text-classification
- bert
- healthcare
- risk-assessment
- questionnaire-analysis
pipeline_tag: text-classification
metrics:
- accuracy
base_model:
- google-bert/bert-base-uncased
library_name: transformers
---
# BERT Classification Models for Healthcare Risk Assessment
This repository contains fine-tuned BERT models for classifying healthcare questionnaire responses into risk categories.
## Model Description
Two BERT-base-uncased models have been fine-tuned for healthcare risk assessment:
1. **Fatigue Model**: Classifies fatigue-related responses
2. **Mental Health Model**: Classifies mental health-related responses
Both models predict three risk categories:
- **Low Risk** (0)
- **Moderate Risk** (1)
- **High Risk** (2)
## Training Details
- **Base Model**: bert-base-uncased
- **Training Epochs**: 40
- **Batch Size**: 16
- **Learning Rate**: 2e-5
- **Optimizer**: AdamW
- **Max Sequence Length**: 128
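For illustration, the setup above roughly corresponds to the following `TrainingArguments` (a sketch, not the exact training script; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./bert-risk-model",   # placeholder path
    num_train_epochs=40,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    optim="adamw_torch",              # AdamW optimizer
)
```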
## Usage
### Loading the Models
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
# Load tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Load fatigue model
fatigue_model = BertForSequenceClassification.from_pretrained('keanteng/bert-classification-wqd7005', subfolder='fatigue_model')
# Load mental health model
mental_health_model = BertForSequenceClassification.from_pretrained('keanteng/bert-classification-wqd7005', subfolder='mental_health_model')
```
### Making Predictions
```python
def predict_risk(text, model, tokenizer, max_length=128):
    # Tokenize input
    inputs = tokenizer(
        text,
        padding='max_length',
        truncation=True,
        max_length=max_length,
        return_tensors='pt'
    )

    # Make prediction
    model.eval()
    with torch.no_grad():
        outputs = model(**inputs)
        predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
        predicted_class = torch.argmax(predictions, dim=-1)

    # Map to risk categories
    risk_labels = ['Low Risk', 'Moderate Risk', 'High Risk']
    return risk_labels[predicted_class.item()], predictions[0].tolist()
# Example usage
fatigue_text = "I feel extremely tired all the time and can't complete daily tasks"
risk_category, confidence_scores = predict_risk(fatigue_text, fatigue_model, tokenizer)
print(f"Risk Category: {risk_category}")
print(f"Confidence Scores: {confidence_scores}")
```
## Model Performance
The models were trained and evaluated on healthcare questionnaire data with the following label mapping:
**Fatigue Model:**
- Fatigue levels 1-2 → Low Risk
- Fatigue level 3 → Moderate Risk
- Fatigue levels 4-5 → High Risk
**Mental Health Model:**
- Mental health levels 1-2 → High Risk
- Mental health level 3 → Moderate Risk
- Mental health levels 4-5 → Low Risk
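In code, these mappings can be expressed as follows (a sketch reconstructed from the description above; the raw questionnaires use 1-5 levels):
```python
# Assumed encodings: 0 = Low Risk, 1 = Moderate Risk, 2 = High Risk
FATIGUE_TO_RISK = {1: 0, 2: 0, 3: 1, 4: 2, 5: 2}        # higher fatigue -> higher risk
MENTAL_HEALTH_TO_RISK = {1: 2, 2: 2, 3: 1, 4: 0, 5: 0}  # inverted scale: lower level -> higher risk
```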
## Training Data
The models were trained on questionnaire responses containing:
- Text descriptions of fatigue levels
- Text descriptions of mental health status
- Corresponding risk labels
Data was split 80/20 for training and validation with stratified sampling.
## Intended Use
These models are designed for:
- Healthcare questionnaire analysis
- Risk assessment screening
- Research applications in healthcare NLP
**Important**: These models are for research and screening purposes only and should not replace professional medical diagnosis.
## Limitations
- Models are trained on specific questionnaire formats
- Performance may vary on different populations or text styles
- Should be used as a screening tool, not for final diagnosis
- May have biases present in the training data |
liumy2010/Qwen2.5-3B-math-UFT | liumy2010 | 2025-05-30T01:51:08Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T06:27:16Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
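A minimal loading sketch with 🤗 Transformers (standard causal-LM usage, not taken from the UFT repo):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liumy2010/Qwen2.5-3B-math-UFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Question: What is 12 * 7? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```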
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-3B-kk_logic-UFT | liumy2010 | 2025-05-30T01:51:01Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T11:49:59Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-3B-kk_logic-SFT | liumy2010 | 2025-05-30T01:51:00Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-16T11:24:36Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-3B-kk_logic-R3 | liumy2010 | 2025-05-30T01:50:54Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T17:22:11Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-3B-countdown-UFT | liumy2010 | 2025-05-30T01:50:53Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-18T11:49:59Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-3B-countdown-R3 | liumy2010 | 2025-05-30T01:50:49Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T07:52:09Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-1.5B-math-UFT | liumy2010 | 2025-05-30T01:50:47Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-19T16:08:33Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-1.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-1.5B-math-RFT | liumy2010 | 2025-05-30T01:50:32Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-19T23:48:11Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-1.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-1.5B-kk_logic-UFT | liumy2010 | 2025-05-30T01:50:22Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T14:13:24Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-1.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-1.5B-kk_logic-SFT-RFT | liumy2010 | 2025-05-30T01:50:18Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T00:45:53Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-1.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-1.5B-kk_logic-SFT | liumy2010 | 2025-05-30T01:50:10Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-16T10:41:31Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-1.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-1.5B-countdown-RFT | liumy2010 | 2025-05-30T01:49:58Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-17T16:30:14Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-1.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-1.5B-countdown-R3 | liumy2010 | 2025-05-30T01:49:58Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-18T00:26:22Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-1.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-0.5B-math-UFT | liumy2010 | 2025-05-30T01:49:57Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-18T23:49:09Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-0.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-0.5B-math-SFT-RFT | liumy2010 | 2025-05-30T01:49:55Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-19T08:57:27Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-0.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-0.5B-math-SFT | liumy2010 | 2025-05-30T01:49:54Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-16T08:02:51Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-0.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-0.5B-math-R3 | liumy2010 | 2025-05-30T01:49:50Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-19T08:48:50Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-0.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-0.5B-kk_logic-SFT-RFT | liumy2010 | 2025-05-30T01:49:47Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T01:41:54Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-0.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-0.5B-kk_logic-RFT | liumy2010 | 2025-05-30T01:49:46Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T05:49:28Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-0.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Qwen2.5-0.5B-countdown-UFT | liumy2010 | 2025-05-30T01:49:44Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-17T04:22:25Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-0.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
mlabonne/gemma-3-27b-it-qat-abliterated-GGUF | mlabonne | 2025-05-30T01:49:43Z | 0 | 2 | transformers | [
"transformers",
"gguf",
"autoquant",
"image-text-to-text",
"base_model:google/gemma-3-27b-it-qat-q4_0-unquantized",
"base_model:quantized:google/gemma-3-27b-it-qat-q4_0-unquantized",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-29T21:38:01Z | ---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
base_model: google/gemma-3-27b-it-qat-q4_0-unquantized
tags:
- autoquant
- gguf
---
# 💎 Gemma 3 27B IT QAT Abliterated

<center>Gemma 3 QAT Abliterated <a href="https://huggingface.co/mlabonne/gemma-3-1b-it-qat-abliterated">1B</a> • <a href="https://huggingface.co/mlabonne/gemma-3-4b-it-qat-abliterated">4B</a> • <a href="https://huggingface.co/mlabonne/gemma-3-12b-it-qat-abliterated">12B</a> • <a href="https://huggingface.co/mlabonne/gemma-3-27b-it-qat-abliterated">27B</a></center>
This is an uncensored version of [google/gemma-3-27b-it-qat-q4_0-unquantized](https://huggingface.co/google/gemma-3-27b-it-qat-q4_0-unquantized) created with a new abliteration technique.
See [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about abliteration.
This is a new, improved version that targets refusals with enhanced accuracy.
I recommend using these generation parameters: `temperature=1.0`, `top_k=64`, `top_p=0.95`.
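For example, with `llama-cpp-python` (a sketch; the `filename` pattern is a placeholder — pick the quant file you actually downloaded from this repo):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mlabonne/gemma-3-27b-it-qat-abliterated-GGUF",
    filename="*q4_0.gguf",  # placeholder quant pattern
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about abliteration."}],
    temperature=1.0,
    top_k=64,
    top_p=0.95,
)
print(out["choices"][0]["message"]["content"])
```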
## ✂️ Abliteration

The refusal direction is computed by comparing the residual streams between target (harmful) and baseline (harmless) samples.
The hidden states of target modules (e.g., o_proj) are orthogonalized to subtract this refusal direction with a given weight factor.
These weight factors follow a normal distribution with a certain spread and peak layer.
Modules can be iteratively orthogonalized in batches, or the refusal direction can be accumulated to save memory.
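As a rough illustration (a sketch based on the description above, not the actual abliteration code), orthogonalizing a weight matrix `W` whose outputs live in the residual stream against a unit refusal direction `r` looks like:
```python
import torch

def ablate_direction(W: torch.Tensor, r: torch.Tensor, weight: float = 1.0) -> torch.Tensor:
    # W: (d_model, d_in) output weights of a target module such as o_proj
    # r: (d_model,) refusal direction; weight: layer-dependent factor
    r = r / r.norm()                           # unit-normalize the refusal direction
    return W - weight * torch.outer(r, r @ W)  # W' = W - w * r (r^T W)
```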
Finally, I used a hybrid evaluation with a dedicated test set to calculate the acceptance rate. This uses both a dictionary approach and [NousResearch/Minos-v1](https://huggingface.co/NousResearch/Minos-v1).
The goal is to obtain an acceptance rate >90% and still produce coherent outputs. |
liumy2010/Qwen2.5-0.5B-countdown-SFT | liumy2010 | 2025-05-30T01:49:42Z | 34 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2505.16984",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-16T05:17:47Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-0.5B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
Nihel13/tatr_model | Nihel13 | 2025-05-30T01:49:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-05-30T01:48:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
liumy2010/Llama-3.2-3B-math-SFT | liumy2010 | 2025-05-30T01:49:11Z | 45 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T21:02:24Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Llama-3.2-3B-kk_logic-UFT | liumy2010 | 2025-05-30T01:48:55Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T22:46:36Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Llama-3.2-3B-kk_logic-R3 | liumy2010 | 2025-05-30T01:48:43Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T02:33:38Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Llama-3.2-3B-countdown-UFT | liumy2010 | 2025-05-30T01:48:39Z | 26 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T23:14:02Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Llama-3.2-3B-countdown-R3 | liumy2010 | 2025-05-30T01:48:31Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T05:41:45Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Llama-3.2-1B-math-UFT | liumy2010 | 2025-05-30T01:48:30Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:44:18Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Llama-3.2-1B-math-SFT-RFT | liumy2010 | 2025-05-30T01:48:29Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T20:39:08Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Llama-3.2-1B-kk_logic-UFT | liumy2010 | 2025-05-30T01:48:25Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T08:04:24Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Llama-3.2-1B-countdown-UFT | liumy2010 | 2025-05-30T01:48:18Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T07:13:09Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
liumy2010/Llama-3.2-1B-countdown-SFT | liumy2010 | 2025-05-30T01:48:15Z | 30 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T17:45:52Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
Moryjj/parst5_3blocks_10 | Moryjj | 2025-05-30T01:47:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-30T01:47:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
liumy2010/Llama-3.2-1B-countdown-R3 | liumy2010 | 2025-05-30T01:47:14Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2505.16984",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T08:37:18Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B
---
## UFT
This repository contains the model presented in [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://huggingface.co/papers/2505.16984).
Code: https://github.com/liumy2010/UFT
## References
* [UFT: Unifying Supervised and Reinforcement Fine-Tuning](https://arxiv.org/abs/2505.16984)
|
manohar-lal-18/original.news.18.manohar.lal.dhakad.viral.video.highway.manohar.lal.dhakad.and.lubna.qureshi.bjp | manohar-lal-18 | 2025-05-30T01:43:58Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T01:42:56Z | original news 18 manohar lal dhakad viral video highway manohar lal dhakad and lubna qureshi bjp
<a rel="nofollow" href="http://viralflix.xyz/leaked?pa">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️</a>
<a rel="nofollow" href="http://viralflix.xyz/leaked?pa">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a rel="nofollow" href="http://viralflix.xyz/leaked?pa"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
original news 18 manohar lal dhakad viral video highway manohar lal dhakad and lubna qureshi bjp
original news 18 manohar lal dhakad viral video highway manohar lal dhakad and lubna qureshi bjp
original news 18 manohar lal dhakad viral video highway manohar lal dhakad and lubna qureshi bjp |
DreamGallery/task-10-microsoft-Phi-4-mini-instruct | DreamGallery | 2025-05-30T01:41:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"region:us"
] | null | 2025-05-30T01:40:25Z | ---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Triangle104/Qwen3-30B-A3B-abliterated-Q6_K-GGUF | Triangle104 | 2025-05-30T01:40:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Qwen3-30B-A3B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-30T01:38:22Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Qwen3-30B-A3B-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
"**Risk of Sensitive or Controversial Outputs**": This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
"**Not Suitable for All Audiences**": Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
"**Legal and Ethical Responsibilities**": Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
"**Research and Experimental Use**": It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
"**Monitoring and Review Recommendations**": Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
"**No Default Safety Guarantees**": Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Qwen3-30B-A3B-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen3-30B-A3B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q6_K-GGUF --hf-file qwen3-30b-a3b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q6_K-GGUF --hf-file qwen3-30b-a3b-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q6_K-GGUF --hf-file qwen3-30b-a3b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q6_K-GGUF --hf-file qwen3-30b-a3b-abliterated-q6_k.gguf -c 2048
```
|
Hsianchengfun/merged_model_WOQ_all_with40 | Hsianchengfun | 2025-05-30T01:37:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T01:34:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
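Since the section above is unfilled, here is a generic 🤗 Transformers sketch for a conversational text-generation checkpoint like this one. Only the repo id comes from this card; the dtype and prompt are assumptions.
```python
# Hedged quick-start sketch; this card documents no official usage recipe.
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="Hsianchengfun/merged_model_WOQ_all_with40",
    torch_dtype=torch.bfloat16,  # assumption; fall back to float16/float32 if unsupported
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello! What can you do?"}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"][-1])
```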
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
victkk/qwen-fine | victkk | 2025-05-30T01:36:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T16:24:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
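As this section is also unfilled, a minimal hedged sketch for loading this Qwen3-architecture checkpoint with 🤗 Transformers follows; only the repo id is taken from this card, the rest is assumption.
```python
# Hedged quick-start sketch; this card documents no official usage recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "victkk/qwen-fine"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what fine-tuning is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```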
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
meimeilook/BAGEL-7B-MoT-FP8 | meimeilook | 2025-05-30T01:35:33Z | 0 | 11 | bagel | [
"bagel",
"fp8",
"quantized",
"mot",
"any-to-any",
"base_model:ByteDance-Seed/BAGEL-7B-MoT",
"base_model:quantized:ByteDance-Seed/BAGEL-7B-MoT",
"license:apache-2.0",
"region:us"
] | any-to-any | 2025-05-22T05:13:08Z | ---
license: apache-2.0
base_model:
- ByteDance-Seed/BAGEL-7B-MoT
base_model_relation: quantized
pipeline_tag: any-to-any
library_name: bagel
tags:
- fp8
- quantized
- bagel
- mot
---
Original model: https://huggingface.co/ByteDance-Seed/BAGEL-7B-MoT
ema-FP8.safetensors contains the float8_e4m3fn weights of that model.
## Benchmark Spec: 24GB 4090 + 60GB RAM
### Default setting, Timesteps 25 steps
| Features | Speed (seconds) | GPU VRAM Usage | CPU RAM Usage |
|---------------------|------------------|----------------|----------------|
| 📝 Text to Image | 128.90 s | 16.18 GB | 14.22 GB |
| 🖌️ Image Edit | 138.67 s | 15.08 GB | 14.21 GB |
| 🖼️ Image Understanding | 102.68 s | 15.08 GB | 13.66 GB |
[Benchmark Images](https://huggingface.co/meimeilook/BAGEL-7B-MoT-FP8/tree/main/Benchmark)
## Support
### Runs with less than 12GB of GPU memory.
### RAM + VRAM = about 31GB
#### *Note: a 12GB GPU is much slower than a 24GB one due to CPU offload, roughly 1.5x slower.*

## How to Install:
### new venv
1. git clone https://github.com/bytedance-seed/BAGEL.git
2. cd BAGEL
3. conda create -n bagel python=3.10 -y
4. conda activate bagel
### install
5. Install PyTorch 2.5.1 (CUDA 12.4):
pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu124
6. pip install [flash_attn-2.7.0.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl](https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.0.post1/flash_attn-2.7.0.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl)
**More wheels: [https://github.com/Dao-AILab/flash-attention/releases](https://github.com/Dao-AILab/flash-attention/releases).
The flash_attn wheel must match your Python version, PyTorch version, and CUDA version.**
7. pip install -r requirements.txt
(first edit requirements.txt and comment out flash_attn==2.5.8, i.e., change it to #flash_attn==2.5.8)
8. pip install gradio pynvml (pynvml is used to check VRAM stats)
## Models & Settings:
1. Download [huggingface.co/ByteDance-Seed/BAGEL-7B-MoT](https://huggingface.co/ByteDance-Seed/BAGEL-7B-MoT) (without ema.safetensors) &
[ema-FP8.safetensors](https://huggingface.co/meimeilook/BAGEL-7B-MoT-FP8/blob/main/ema-FP8.safetensors)
and make it like this.
```
folders
├── BAGEL
│ └── app-fp8.py
└── BAGEL-7B-MoT
└── ema-FP8.safetensors
```
2. Open app-fp8.py in Notepad, VS Code, etc.
3. Replace model_path with your own path:
```
parser.add_argument("--model_path", type=str, default="/root/your_path/BAGEL-7B-MoT")
```
4. Edit for your spec (see the device-map sketch after this list):
```
cpu_mem_for_offload = "16GiB"
gpu_mem_per_device = "24GiB"  # default: 24GiB; on a 4090 you can set 16GiB instead, but it will be slower
```
5. Be more efficient:
```
NUM_ADDITIONAL_LLM_LAYERS_TO_GPU = 5
# 5 for 24GB VRAM, >5 for 32GB VRAM; experiment to see what fits.
# The default keeps 10 layers on the GPU; with this setting a 4090 can keep 15 layers on the GPU.
```
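For context on what those two memory caps control: offload frameworks typically translate per-device budgets like these into a layer-to-device placement. Below is a small illustrative sketch of that mechanism using Hugging Face accelerate's `infer_auto_device_map`; it is not code from app-fp8.py, and the placeholder model is only there to make the example self-contained.
```python
# Illustrative only: shows how per-device memory budgets commonly become a device map.
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

with init_empty_weights():  # build the model skeleton without allocating real weights
    model = AutoModelForCausalLM.from_config(
        AutoConfig.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # placeholder model
    )

device_map = infer_auto_device_map(
    model,
    max_memory={0: "24GiB", "cpu": "16GiB"},  # mirrors gpu_mem_per_device / cpu_mem_for_offload
)
print(device_map)  # layers exceeding the GPU budget are placed on "cpu"
```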
## How to Use:
1. cd BAGEL
2. conda activate bagel
3. python app-fp8.py
4. Open [127.0.0.1:7860](http://127.0.0.1:7860)

|
lubna-qureshi-18/original.news.18.Lubna.Qureshi.Thi.viral.video.highway.lubna.qureshi.and.manohar.lal.dhakad.bjp | lubna-qureshi-18 | 2025-05-30T01:32:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T01:31:16Z | Original news 18 lubna qureshi thi viral video highway lubna qureshi and manohar lal dhakad bjp
|
DreamGallery/task-10-microsoft-Phi-3-mini-4k-instruct | DreamGallery | 2025-05-30T01:29:20Z | 28 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-05-29T02:42:32Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
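The section above is unfilled; the usual PEFT pattern for an adapter like this one (base_model: microsoft/Phi-3.5-mini-instruct, per the metadata) is sketched below. Treat it as an assumption, since the card documents no task type.
```python
# Hedged sketch: load the base model, then attach this repo's PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base_id = "microsoft/Phi-3.5-mini-instruct"  # from this card's metadata
adapter_id = "DreamGallery/task-10-microsoft-Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```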
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_M-GGUF | Triangle104 | 2025-05-30T01:25:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Qwen3-30B-A3B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-30T01:23:58Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Qwen3-30B-A3B-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
"**Risk of Sensitive or Controversial Outputs**": This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
"**Not Suitable for All Audiences**": Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
"**Legal and Ethical Responsibilities**": Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
"**Research and Experimental Use**": It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
"**Monitoring and Review Recommendations**": Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
"**No Default Safety Guarantees**": Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen3-30B-A3B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-abliterated-q5_k_m.gguf -c 2048
```
|
bruhzair/prototype4x15 | bruhzair | 2025-05-30T01:24:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T01:06:50Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x15
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
* /workspace/cache/models--Sao10K--L3-70B-Euryale-v2.1/snapshots/36ad832b771cd783ea7ad00ed39e61f679b1a7c6
* /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
parameters:
select_topk: 0.5
- model: /workspace/cache/models--Sao10K--L3-70B-Euryale-v2.1/snapshots/36ad832b771cd783ea7ad00ed39e61f679b1a7c6
parameters:
select_topk: 0.5
- model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
parameters:
select_topk: 0.5
- model: /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
parameters:
select_topk: 0.85
base_model: /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
merge_method: sce
tokenizer:
source: union
chat_template: "llama3"
int8_mask: true
dtype: bfloat16
```
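To reproduce a merge from a configuration like this one, mergekit exposes a `mergekit-yaml` command-line entry point. The sketch below assumes the YAML above is saved as config.yaml and that the model snapshot paths it references exist locally.
```bash
# Hypothetical invocation; adjust paths and flags to your environment.
pip install mergekit
mergekit-yaml config.yaml ./prototype-0.4x15 --cuda --lazy-unpickle
```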
|
qq456cvb/3DCorrEnhance | qq456cvb | 2025-05-30T01:22:54Z | 0 | 1 | null | [
"image-feature-extraction",
"arxiv:2411.19458",
"license:mit",
"region:us"
] | image-feature-extraction | 2025-01-26T05:08:09Z | ---
license: mit
pipeline_tag: image-feature-extraction
---
This repository contains the model introduced in the paper [Multiview Equivariance Improves 3D Correspondence Understanding with Minimal Feature Finetuning](https://huggingface.co/papers/2411.19458).
Code: https://github.com/qq456cvb/3DCorrEnhance. |
bruhzair/prototype4x19 | bruhzair | 2025-05-30T01:21:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T00:56:54Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x19
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Sao10K--L3-70B-Euryale-v2.1/snapshots/36ad832b771cd783ea7ad00ed39e61f679b1a7c6
* /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
* /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
parameters:
select_topk: 0.3
- model: /workspace/cache/models--Sao10K--L3-70B-Euryale-v2.1/snapshots/36ad832b771cd783ea7ad00ed39e61f679b1a7c6
parameters:
select_topk: 0.5
- model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
parameters:
select_topk: 0.5
- model: /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
parameters:
select_topk: 0.85
base_model: /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
merge_method: sce
tokenizer:
source: union
chat_template: llama3
int8_mask: true
dtype: bfloat16
```
|
Mungert/medgemma-27b-text-it-GGUF | Mungert | 2025-05-30T01:20:14Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"clinical-reasoning",
"thinking",
"text-generation",
"arxiv:2501.19393",
"arxiv:2303.15343",
"arxiv:2009.13081",
"arxiv:2102.09542",
"arxiv:2411.15640",
"arxiv:2404.05590",
"arxiv:2501.18362",
"base_model:google/gemma-3-27b-pt",
"base_model:quantized:google/gemma-3-27b-pt",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-05-29T06:32:42Z | ---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access MedGemma on Hugging Face
extra_gated_prompt: >-
To access MedGemma on Hugging Face, you're required to review and agree to
[Health AI Developer Foundation's terms of
use](https://developers.google.com/health-ai-developer-foundations/terms). To
do this, please ensure you're logged in to Hugging Face and click below.
Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-pt
tags:
- medical
- clinical-reasoning
- thinking
---
# <span style="color: #7FFF7F;">medgemma-27b-text-it GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`f5cd27b7`](https://github.com/ggerganov/llama.cpp/commit/f5cd27b71da3ac375a04a41643d14fc779a8057b).
## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span>
Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
### **Benchmark Context**
All tests conducted on **Llama-3-8B-Instruct** using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
### **Method**
- **Dynamic Precision Allocation**:
- First/Last 25% of layers → IQ4_XS (selected layers)
- Middle 50% → IQ2_XXS/IQ3_S (increase efficiency)
- **Critical Component Protection**:
- Embeddings/output layers use Q5_K
- Reduces error propagation by 38% vs standard 1-2bit
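To make the allocation rule concrete, here is a small illustrative sketch of the layer bucketing described above; the function and tensor names are ours for illustration, not llama.cpp internals.
```python
# Illustrative sketch of the precision-allocation rule above (not llama.cpp code).
def assign_quant_types(n_layers: int) -> dict:
    edge = n_layers // 4  # first/last 25% of layers get higher precision
    plan = {"token_embd": "Q5_K", "output": "Q5_K"}  # protected components
    for i in range(n_layers):
        if i < edge or i >= n_layers - edge:
            plan[f"blk.{i}"] = "IQ4_XS"   # edges: selected higher-precision layers
        else:
            plan[f"blk.{i}"] = "IQ2_XXS"  # middle 50%: aggressive quantization
    return plan

plan = assign_quant_types(32)
print(plan["blk.0"], plan["blk.16"], plan["output"])  # IQ4_XS IQ2_XXS Q5_K
```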
### **Quantization Performance Comparison (Llama-3-8B)**
| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
**Key**:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate (e.g., IQ1_M: (15.41 - 27.46) / 27.46 ≈ -43.9%)
- Speed = Inference time (CPU AVX2, 2048-token context)
- Size differences reflect mixed quantization overhead
**Key Improvements:**
- 🔥 **IQ1_M** shows massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB
- ⚡ **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization
**Tradeoffs:**
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)
### **When to Use These Models**
📌 **Fitting models into GPU VRAM**
✔ **Memory-constrained deployments**
✔ **CPU and Edge Devices** where 1-2 bit errors can be tolerated
✔ **Research** into ultra-low-bit quantization
## **Choosing the Right Model Format**
Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.
### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides **similar dynamic range** as FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32.
📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.
📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
---
### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision** but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.
📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.
---
### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, requires more memory.
📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.
📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
---
### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.
- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
- **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
- **Trade-off**: Lower accuracy compared to higher-bit quantizations.
- **IQ3_S**: Small block size for **maximum memory efficiency**.
- **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
- **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
- **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
- **Use case**: Best for **ARM-based devices** or **low-memory environments**.
---
### **Summary Table: Model Format Selection**
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
---
## **Included Files & Details**
### `medgemma-27b-text-it-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.
### `medgemma-27b-text-it-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.
### `medgemma-27b-text-it-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.
### `medgemma-27b-text-it-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.
### `medgemma-27b-text-it-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.
### `medgemma-27b-text-it-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.
### `medgemma-27b-text-it-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K** .
### `medgemma-27b-text-it-q8_0.gguf`
- Fully **Q8** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.
### `medgemma-27b-text-it-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.
### `medgemma-27b-text-it-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.
### `medgemma-27b-text-it-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer IQ4_NL for better accuracy.
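Any of the files above can be run with llama.cpp in the usual way; for example (repo id from this page, filename from the list above, prompt from the card's own examples):
```bash
# Pick whichever quantization from the list above fits your hardware.
llama-cli --hf-repo Mungert/medgemma-27b-text-it-GGUF \
  --hf-file medgemma-27b-text-it-q4_k.gguf \
  -p "How do you differentiate bacterial from viral pneumonia?"
```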
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
❤ **Please click "Like" if you find this useful!**
Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Free Network Monitor](https://readyforquantum.com/dashboard/?assistant=open)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4o-mini)
- `HugLLM` (Hugging Face Open-source)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4o-mini** for:
- **Create custom cmd processors to run .NET code on Free Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
- 🔑 Get more tokens by logging in or [downloading our Free Network Monitor Agent with integrated AI Assistant](https://readyforquantum.com/download)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on Hugging Face Inference API
### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Free Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature. Use with caution!
# MedGemma model card
**Model documentation:** [MedGemma](https://developers.google.com/health-ai-developer-foundations/medgemma)
**Resources:**
* Model on Google Cloud Model Garden: [MedGemma](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/medgemma)
* Model on Hugging Face: [MedGemma](https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4)
* GitHub repository (supporting code, Colab notebooks, discussions, and
issues): [MedGemma](https://github.com/google-health/medgemma)
* Quick start notebook: [GitHub](https://github.com/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb)
* Fine-tuning notebook: [GitHub](https://github.com/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb)
* [Patient Education Demo built using MedGemma](https://huggingface.co/spaces/google/rad_explain)
* Support: See [Contact](https://developers.google.com/health-ai-developer-foundations/medgemma/get-started.md#contact)
* License: The use of MedGemma is governed by the [Health AI Developer
Foundations terms of
use](https://developers.google.com/health-ai-developer-foundations/terms).
**Author:** Google
## Model information
This section describes the MedGemma model and how to use it.
### Description
MedGemma is a collection of [Gemma 3](https://ai.google.dev/gemma/docs/core)
variants that are trained for performance on medical text and image
comprehension. Developers can use MedGemma to accelerate building
healthcare-based AI applications. MedGemma currently comes in two variants: a 4B
multimodal version and a 27B text-only version.
MedGemma 27B has been trained exclusively on medical text and optimized for
inference-time computation. MedGemma 27B is only available as an
instruction-tuned model.
MedGemma variants have been evaluated on a range of clinically relevant
benchmarks to illustrate their baseline performance. These include both open
benchmark datasets and curated datasets. Developers can fine-tune MedGemma
variants for improved performance. Consult the Intended Use section below for
more details.
A full technical report will be available soon.
### How to use
Below are some example code snippets to help you quickly get started running the
model locally on GPU. If you want to use the model at scale, we recommend that
you create a production version using [Model
Garden](https://cloud.google.com/model-garden).
First, install the Transformers library. Gemma 3 is supported starting from
transformers 4.50.0.
```sh
$ pip install -U transformers
```
**Run model with the `pipeline` API**
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="google/medgemma-27b-text-it",
torch_dtype=torch.bfloat16,
device="cuda",
)
messages = [
{
"role": "system",
"content": "You are a helpful medical assistant."
},
{
"role": "user",
"content": "How do you differentiate bacterial from viral pneumonia?"
}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
**Run the model directly**
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "google/medgemma-27b-text-it"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{
"role": "system",
"content": "You are a helpful medical assistant."
},
{
"role": "user",
"content": "How do you differentiate bacterial from viral pneumonia?"
}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**inputs, max_new_tokens=200, do_sample=False)
generation = generation[0][input_len:]
decoded = tokenizer.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Examples
See the following Colab notebooks for examples of how to use MedGemma:
* To give the model a quick try, running it locally with weights from Hugging
Face, see [Quick start notebook in
Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb).
Note that you will need to use Colab Enterprise to run the 27B model without
quantization.
* For an example of fine-tuning the model, see the [Fine-tuning notebook in
Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb).
### Model architecture overview
The MedGemma model is built based on [Gemma 3](https://ai.google.dev/gemma/) and
uses the same decoder-only transformer architecture as Gemma 3. To read more
about the architecture, consult the Gemma 3 [model
card](https://ai.google.dev/gemma/docs/core/model_card_3).
### Technical specifications
* **Model type**: Decoder-only Transformer architecture, see the [Gemma 3
technical
report](https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf)
* **Modalities**: **4B**: Text, vision; **27B**: Text only
* **Attention mechanism**: Utilizes grouped-query attention (GQA)
* **Context length**: Supports long context, at least 128K tokens
* **Key publication**: Coming soon
* **Model created**: May 20, 2025
* **Model version**: 1.0.0
### Citation
A technical report is coming soon. In the meantime, if you publish using this
model, please cite the Hugging Face model page:
```none
@misc{medgemma-hf,
author = {Google},
title = {MedGemma Hugging Face},
howpublished = {\url{https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4}},
year = {2025},
note = {Accessed: [Insert Date Accessed, e.g., 2025-05-20]}
}
```
### Inputs and outputs
**Input**:
* Text string, such as a question or prompt
* Total input length of 128K tokens
**Output**:
* Generated text in response to the input, such as an answer to a question,
analysis of image content, or a summary of a document
* Total output length of 8192 tokens
### Performance and validation
MedGemma was evaluated across a range of different multimodal classification,
report generation, visual question answering, and text-based tasks.
### Key performance metrics
#### Text evaluations
MedGemma 4B and text-only MedGemma 27B were evaluated across a range of
text-only benchmarks for medical knowledge and reasoning.
The MedGemma models outperform their respective base Gemma models across all
tested text-only health benchmarks.
| Metric | MedGemma 27B | Gemma 3 27B | MedGemma 4B | Gemma 3 4B |
| :---- | :---- | :---- | :---- | :---- |
| MedQA (4-op) | 89.8 (best-of-5) 87.7 (0-shot) | 74.9 | 64.4 | 50.7 |
| MedMCQA | 74.2 | 62.6 | 55.7 | 45.4 |
| PubMedQA | 76.8 | 73.4 | 73.4 | 68.4 |
| MMLU Med (text only) | 87.0 | 83.3 | 70.0 | 67.2 |
| MedXpertQA (text only) | 26.7 | 15.7 | 14.2 | 11.6 |
| AfriMed-QA | 84.0 | 72.0 | 52.0 | 48.0 |
For all MedGemma 27B results, [test-time
scaling](https://arxiv.org/abs/2501.19393) is used to improve performance.
### Ethics and safety evaluation
#### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* **Child safety**: Evaluation of text-to-text and image-to-text prompts
covering child safety policies, including child sexual abuse and
exploitation.
* **Content safety:** Evaluation of text-to-text and image-to-text prompts
covering safety policies, including harassment, violence and gore, and hate
speech.
* **Representational harms**: Evaluation of text-to-text and image-to-text
prompts covering safety policies, including bias, stereotyping, and harmful
associations or inaccuracies.
* **General medical harms:** Evaluation of text-to-text and image-to-text
prompts covering safety policies, including information quality and harmful
associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance evaluations"
which are our "arms-length" internal evaluations for responsibility governance
decision making. They are conducted separately from the model development team,
to inform decision making about release. High-level findings are fed back to the
model team, but prompt sets are held out to prevent overfitting and preserve the
results' ability to inform decision making. Notable assurance evaluation results
are reported to our Responsibility & Safety Council as part of release review.
#### Evaluation results
For all areas of safety testing, we saw safe levels of performance across the
categories of child safety, content safety, and representational harms. All
testing was conducted without safety filters to evaluate the model capabilities
and behaviors. For text-to-text, image-to-text, and audio-to-text, and across
both MedGemma model sizes, the model produced minimal policy violations. A
limitation of our evaluations was that they included primarily English language
prompts.
## Data card
### Dataset overview
#### Training
The base Gemma models are pre-trained on a large corpus of text and code data.
MedGemma 4B utilizes a [SigLIP](https://arxiv.org/abs/2303.15343) image encoder
that has been specifically pre-trained on a variety of de-identified medical
data, including radiology images, histopathology images, ophthalmology images,
and dermatology images. Its LLM component is trained on a diverse set of medical
data, including medical text relevant to radiology images, chest-x rays,
histopathology patches, ophthalmology images and dermatology images.
#### Evaluation
MedGemma models have been evaluated on a comprehensive set of clinically
relevant benchmarks, including over 22 datasets across 5 different tasks and 6
medical image modalities. These include both open benchmark datasets and curated
datasets, with a focus on expert human evaluations for tasks like CXR report
generation and radiology VQA.
#### Source
MedGemma utilizes a combination of public and private datasets.
This model was trained on diverse public datasets including MIMIC-CXR (chest
X-rays and reports), Slake-VQA (multimodal medical images and questions),
PAD-UFES-20 (skin lesion images and data), SCIN (dermatology images), TCGA
(cancer genomics data), CAMELYON (lymph node histopathology images), PMC-OA
(biomedical literature with images), and Mendeley Digital Knee X-Ray (knee
X-rays).
Additionally, multiple diverse proprietary datasets were licensed and
incorporated (described next).
### Data Ownership and Documentation
* [Mimic-CXR](https://physionet.org/content/mimic-cxr/2.1.0/): MIT Laboratory
for Computational Physiology and Beth Israel Deaconess Medical Center
(BIDMC).
* [Slake-VQA](https://www.med-vqa.com/slake/): The Hong Kong Polytechnic
University (PolyU), with collaborators including West China Hospital of
Sichuan University and Sichuan Academy of Medical Sciences / Sichuan
Provincial People's Hospital.
* [PAD-UFES-20](https://pmc.ncbi.nlm.nih.gov/articles/PMC7479321/): Federal
University of Espírito Santo (UFES), Brazil, through its Dermatological and
Surgical Assistance Program (PAD).
* [SCIN](https://github.com/google-research-datasets/scin): A collaboration
between Google Health and Stanford Medicine.
* [TCGA](https://portal.gdc.cancer.gov/) (The Cancer Genome Atlas): A joint
effort of National Cancer Institute and National Human Genome Research
Institute. Data from TCGA are available via the Genomic Data Commons (GDC)
* [CAMELYON](https://camelyon17.grand-challenge.org/Data/): The data was
collected from Radboud University Medical Center and University Medical
Center Utrecht in the Netherlands.
* [PMC-OA (PubMed Central Open Access
Subset)](https://catalog.data.gov/dataset/pubmed-central-open-access-subset-pmc-oa):
Maintained by the National Library of Medicine (NLM) and National Center for
Biotechnology Information (NCBI), which are part of the NIH.
* [MedQA](https://arxiv.org/pdf/2009.13081): This dataset was created by a
team of researchers led by Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung
Weng, Hanyi Fang, and Peter Szolovits
* [Mendeley Digital Knee
X-Ray](https://data.mendeley.com/datasets/t9ndx37v5h/1): This dataset is
from Rani Channamma University, and is hosted on Mendeley Data.
* [AfriMed-QA](https://afrimedqa.com/): This data was developed and led by
multiple collaborating organizations and researchers include key
contributors: Intron Health, SisonkeBiotik, BioRAMP, Georgia Institute of
Technology, and MasakhaneNLP.
* [VQA-RAD](https://www.nature.com/articles/sdata2018251): This dataset was
created by a research team led by Jason J. Lau, Soumya Gayen, Asma Ben
Abacha, and Dina Demner-Fushman and their affiliated institutions (the US
National Library of Medicine and National Institutes of Health)
* [MedExpQA](https://www.sciencedirect.com/science/article/pii/S0933365724001805):
This dataset was created by researchers at the HiTZ Center (Basque Center
for Language Technology and Artificial Intelligence).
* [MedXpertQA](https://huggingface.co/datasets/TsinghuaC3I/MedXpertQA): This
dataset was developed by researchers at Tsinghua University (Beijing, China)
and Shanghai Artificial Intelligence Laboratory (Shanghai, China).
In addition to the public datasets listed above, MedGemma was also trained on
de-identified datasets licensed for research or collected internally at Google
from consented participants.
* Radiology dataset 1: De-identified dataset of different CT studies across
body parts from a US-based radiology outpatient diagnostic center network.
* Ophthalmology dataset 1: De-identified dataset of fundus images from
diabetic retinopathy screening.
* Dermatology dataset 1: De-identified dataset of teledermatology skin
condition images (both clinical and dermatoscopic) from Colombia.
* Dermatology dataset 2: De-identified dataset of skin cancer images (both
clinical and dermatoscopic) from Australia.
* Dermatology dataset 3: De-identified dataset of non-diseased skin images
from an internal data collection effort.
* Pathology dataset 1: De-identified dataset of histopathology H&E whole slide
images created in collaboration with an academic research hospital and
biobank in Europe. Comprises de-identified colon, prostate, and lymph nodes.
* Pathology dataset 2: De-identified dataset of lung histopathology H&E and
IHC whole slide images created by a commercial biobank in the United States.
* Pathology dataset 3: De-identified dataset of prostate and lymph node H&E
and IHC histopathology whole slide images created by a contract research
organization in the United States.
* Pathology dataset 4: De-identified dataset of histopathology, predominantly
H&E whole slide images created in collaboration with a large, tertiary
teaching hospital in the United States. Comprises a diverse set of tissue
and stain types, predominantly H&E.
### Data citation
* **MIMIC-CXR** Johnson, A., Pollard, T., Mark, R., Berkowitz, S., & Horng, S.
(2024). MIMIC-CXR Database (version 2.1.0). PhysioNet.
https://physionet.org/content/mimic-cxr/2.1.0/
*and* Johnson, Alistair E. W., Tom J. Pollard, Seth J. Berkowitz, Nathaniel R.
Greenbaum, Matthew P. Lungren, Chih-Ying Deng, Roger G. Mark, and Steven
Horng. 2019. "MIMIC-CXR, a de-Identified Publicly Available Database of
Chest Radiographs with Free-Text Reports." *Scientific Data 6* (1): 1–8.
* **SLAKE** Liu, Bo, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, and Xiao-Ming Wu.
2021. "SLAKE: A Semantically-Labeled Knowledge-Enhanced Dataset for Medical
Visual Question Answering." http://arxiv.org/abs/2102.09542.
* **PAD-UFES** Pacheco, A. G. C., Lima, G. R., Salomao, A., Krohling, B.,
Biral, I. P., de Angelo, G. G., Alves, F. O. G., Ju X. M., & P. R. C.
(2020). PAD-UFES-20: A skin lesion dataset composed of patient data and
clinical images collected from smartphones. In *Proceedings of the 2020 IEEE
International Conference on Bioinformatics and Biomedicine (BIBM)* (pp.
1551-1558). IEEE. https://doi.org/10.1109/BIBM49941.2020.9313241
* **SCIN** Ward, Abbi, Jimmy Li, Julie Wang, Sriram Lakshminarasimhan, Ashley
Carrick, Bilson Campana, Jay Hartford, et al. 2024. "Creating an Empirical
Dermatology Dataset Through Crowdsourcing With Web Search Advertisements."
*JAMA Network Open 7* (11): e2446615–e2446615.
* **TCGA** The results shown here are in whole or part based upon data
generated by the TCGA Research Network: https://www.cancer.gov/tcga.
* **CAMELYON16** Ehteshami Bejnordi, Babak, Mitko Veta, Paul Johannes van
Diest, Bram van Ginneken, Nico Karssemeijer, Geert Litjens, Jeroen A. W. M.
van der Laak, et al. 2017. "Diagnostic Assessment of Deep Learning
Algorithms for Detection of Lymph Node Metastases in Women With Breast
Cancer." *JAMA 318* (22): 2199–2210.
* **MedQA** Jin, Di, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang,
and Peter Szolovits. 2020. "What Disease Does This Patient Have? A
Large-Scale Open Domain Question Answering Dataset from Medical Exams."
http://arxiv.org/abs/2009.13081.
* **Mendeley Digital Knee X-Ray** Gornale, Shivanand; Patravali, Pooja (2020),
"Digital Knee X-ray Images", Mendeley Data, V1, doi: 10.17632/t9ndx37v5h.1
* **AfriMed-QA** Olatunji, Tobi, Charles Nimo, Abraham Owodunni, Tassallah
Abdullahi, Emmanuel Ayodele, Mardhiyah Sanni, Chinemelu Aka, et al. 2024.
"AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering
Benchmark Dataset." http://arxiv.org/abs/2411.15640.
* **VQA-RAD** Lau, Jason J., Soumya Gayen, Asma Ben Abacha, and Dina
Demner-Fushman. 2018. "A Dataset of Clinically Generated Visual Questions
and Answers about Radiology Images." *Scientific Data 5* (1): 1–10.
* **MedExpQA** Alonso, I., Oronoz, M., & Agerri, R. (2024). MedExpQA:
Multilingual Benchmarking of Large Language Models for Medical Question
Answering. *arXiv preprint arXiv:2404.05590*. Retrieved from
https://arxiv.org/abs/2404.05590
* **MedXpertQA** Zuo, Yuxin, Shang Qu, Yifei Li, Zhangren Chen, Xuekai Zhu,
Ermo Hua, Kaiyan Zhang, Ning Ding, and Bowen Zhou. 2025. "MedXpertQA:
Benchmarking Expert-Level Medical Reasoning and Understanding."
http://arxiv.org/abs/2501.18362.
### De-identification/anonymization
Google and its partners utilize datasets that have been rigorously anonymized or
de-identified to ensure the protection of individual research participants and
patient privacy.
## Implementation information
Details about the model internals.
### Software
Training was done using [JAX](https://github.com/jax-ml/jax).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
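For illustration only (this is not MedGemma's actual training code), the pattern JAX enables is compiling a function once with XLA and differentiating it automatically, so the same code runs unchanged across hardware backends:

```python
import jax
import jax.numpy as jnp

@jax.jit  # compile with XLA; the same function runs on CPU, GPU, or TPU backends
def mse_loss(w, x, y):
    # A toy mean-squared-error loss over a linear model
    return jnp.mean((x @ w - y) ** 2)

grad_fn = jax.jit(jax.grad(mse_loss))  # gradients via automatic differentiation
```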
## Use and limitations
### Intended use
MedGemma is an open multimodal generative AI model intended to be used as a
starting point that enables more efficient development of downstream healthcare
applications involving medical text and images. MedGemma is intended for
developers in the life sciences and healthcare space. Developers are responsible
for training, adapting and making meaningful changes to MedGemma to accomplish
their specific intended use. MedGemma models can be fine-tuned by developers
using their own proprietary data for their specific tasks or solutions.
MedGemma is based on Gemma 3 and has been further trained on medical images and
text. MedGemma enables further development in any medical context (image and
textual); however, the model was pre-trained using chest X-ray, pathology,
dermatology, and fundus images. Examples of tasks within MedGemma's training
include visual question answering pertaining to medical images, such as
radiographs, or providing answers to textual medical questions. Full details of
all the tasks MedGemma has been evaluated on can be found in an upcoming
technical report.
### Benefits
* Provides strong baseline medical image and text comprehension for models of
its size.
* This strong performance makes it efficient to adapt for downstream
healthcare-based use cases, compared to models of similar size without
medical data pre-training.
* This adaptation may involve prompt engineering, grounding, agentic
orchestration or fine-tuning depending on the use case, baseline validation
requirements, and desired performance characteristics.
### Limitations
MedGemma is not intended to be used without appropriate validation, adaptation,
and/or meaningful modification by developers for their specific use case.
The outputs generated by MedGemma are not intended to directly inform clinical
diagnosis, patient management decisions, treatment recommendations, or any other
direct clinical practice applications. Performance benchmarks highlight baseline
capabilities on relevant benchmarks, but even for image and text domains that
constitute a substantial portion of training data, inaccurate model output is
possible. All outputs from MedGemma should be considered preliminary and require
independent verification, clinical correlation, and further investigation
through established research and development methodologies.
MedGemma's multimodal capabilities have been primarily evaluated on single-image
tasks. MedGemma has not been evaluated in use cases that involve comprehension
of multiple images.
MedGemma has not been evaluated or optimized for multi-turn applications.
MedGemma's training may make it more sensitive to the specific prompt used than
Gemma 3.
When adapting MedGemma, developers should consider the following:
* **Bias in validation data:** As with any research, developers should ensure
that any downstream application is validated to understand performance using
data that is appropriately representative of the intended use setting for
the specific application (e.g., age, sex, gender, condition, imaging device,
etc.).
* **Data contamination concerns**: When evaluating the generalization
capabilities of a large model like MedGemma in a medical context, there is a
risk of data contamination, where the model might have inadvertently seen
related medical information during its pre-training, potentially
overestimating its true ability to generalize to novel medical concepts.
Developers should validate MedGemma on datasets not publicly available or
otherwise made available to non-institutional researchers to mitigate this
risk. |
SeeFlock/task-10-microsoft-Phi-3-mini-4k-instruct | SeeFlock | 2025-05-30T01:16:39Z | 26 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-05-29T02:35:25Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
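While the card itself is unfilled, the metadata lists `microsoft/Phi-3.5-mini-instruct` as the base model and this repo as a PEFT adapter, so a generic loading sketch (usage details are otherwise unconfirmed) would be:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter repo are taken from the card metadata; everything else is generic
base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "SeeFlock/task-10-microsoft-Phi-3-mini-4k-instruct")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
```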
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
elkababi2/Darija_Orpheus_3b_YFTA | elkababi2 | 2025-05-30T01:16:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T01:11:44Z | ---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** elkababi2
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bruhzair/prototype4x18 | bruhzair | 2025-05-30T01:10:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T00:47:12Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x18
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
* /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
* /workspace/cache/models--Sao10K--L3-70B-Euryale-v2.1/snapshots/36ad832b771cd783ea7ad00ed39e61f679b1a7c6
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
parameters:
select_topk: 0.9
- model: /workspace/cache/models--Sao10K--L3-70B-Euryale-v2.1/snapshots/36ad832b771cd783ea7ad00ed39e61f679b1a7c6
parameters:
select_topk: 0.5
- model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
parameters:
select_topk: 0.5
- model: /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
parameters:
select_topk: 0.85
base_model: /workspace/cache/models--huihui-ai--DeepSeek-R1-Distill-Llama-70B-abliterated/snapshots/116ff0fa55425b094a38a6bbf6faf2f5cafea335
merge_method: sce
tokenizer:
source: union
chat_template: llama3
int8_mask: true
dtype: bfloat16
```
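To reproduce a merge like this locally, the YAML above can be fed to mergekit; the following is a sketch assuming mergekit's documented Python entry point (paths are illustrative):

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML shown above (saved as config.yaml) and run the SCE merge
with open("config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(config, "./prototype-0.4x18", options=MergeOptions(cuda=torch.cuda.is_available()))
```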
|
Hsianchengfun/merged_model_WOQ_epoch1441 | Hsianchengfun | 2025-05-30T01:08:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T01:05:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
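The card is unfilled, but the repo tags mark this as a llama-architecture text-generation checkpoint, so a generic loading sketch (task-specific usage is unconfirmed) would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hsianchengfun/merged_model_WOQ_epoch1441"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```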
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RegalHyperus/DrumKitRVCModels | RegalHyperus | 2025-05-30T01:01:43Z | 0 | 3 | null | [
"license:openrail",
"region:us"
] | null | 2023-06-27T16:31:17Z | ---
license: openrail
---
As the name implies, this library is full of RVC AI drum kit models, which work like RVC voice models, except with drums.
An introduction to RVC drum models:
RVC drum models basically make your drums sound different while maintaining the drumline.
Say you input drum audio A and use an RVC drum model trained on drum audio B. The output will be drum audio A's drumline, but played on the drums of drum audio B.
For drum kit models that blend the drums of multiple songs together, see [DrumKitFusionRVCModels](https://huggingface.co/RegalHyperus/DrumKitFusionRVCModels).
They ain't got rhythm...
Please credit me if used, and do NOT monetize anything made using my RVC models. Thank you very much! (^⩌^)
Sincerely, the one and only RegalHyperus
X, Instagram, YouTube: @RegalHyperus
## Fair Use
Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use.
## Credits
Some songs are courtesy of www.EpidemicSound.com (e.g., Cheat Sheet, Coconut Rock, Human Cannon, Meet the Masters of Circus, Such Gossip, and When the Cat's Away). And two are licensed under CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/) (Dream Culture & Meatball Parade).
Dancing on the Moon was provided by NoCopyrightSounds. (Free DL/Stream: NCS.io/DOTM | Watch: youtu.be/9EHXqi0ez54)
## Songs Featured (incomplete):
AJR - 100 Bad Days, 3 O'Clock Things, Bang!, Bummerland, Burn the House Down, Christmas in June, Christmas in June (Suno "One Song to the Tune of Another" cover)
Kumiko Osugi & Koorogi '73 - 3nin no Uta
Gayle - ABCDEFU
Nanashi Mumei - A New Start
Rosé & Bruno Mars - Apt.
One Direction - Act My Age
Disasterpeace - Adventure (from Fez)
Tollan Kim & Kudasaibeats - Aesthetic
Phineas Flynn & Swampy - Ain't Got Rhythm (Drums)
Mr.Kitty - After Dark
LiSA - Akeboshi
"Weird Al" Yankovic - Albuquerque
Rica Matsumoto - Alive a Life
Eric Carmen - All by Myself
Mariah Carey - All I Want for Christmas Is You
Garrett Williamson - Alpharad End Theme (2021), Break In
Bill Wurtz - And the Day Goes On, At the Airport Terminal
Fatty Spins - Apple Store Love Song, Doin' Your Mom
Ozuna & Gims - Arhbo
Harry Styles - As It Was (Prep cover)
Matt Maltese - As the World Caves In
Masked Wolf - Astronaut in the Ocean
Nozomi Aoki - Asunaki Tabi
The Green Orbs - At the Fair
SantiOkuu - Attack of the Stupid King
Charlie Puth - Attention
K-391 & RØRY - Aurora
Taku Iwasaki - Awake
Ichika Nito & Luke Holland - Awakening (Drum Remix ver.)
BPB - Cassette 808 Drums Sample Pack
Zayde Wolf & EDVN - Back in the Fight
The Score & Dreamers - Bad Days
Michael Jackson - Bad, Billie Jean, Dangerous
Ed Sheeran - Bad Habits, Celestial
Kazuma Kiryu - Baka Mitai (Taxi Driver ver.)
Mustard ft. Roddy Ricch - Ballin'
Kornell Aka Piermid - Balls in Yur Jaws
Satoko Yamano, Ushio Hashimoto, Hitomi Takimoto, Akira Hayashi, Ryūsei Nakao & Motoko Kumai - Barbafamily no Uta
Neal Hefti - Batman Theme (1960s)
Linkin Park - Battle Symphony
Raito - Beat from Melty Blood, Gathers Under Night..., Night Walker (both versions), Overwhelm Despair
Ikuo - Believer
Imagine Dragons - Believer, Birds, Bleeding Out, Bones, Cool Out, Demons, Digital, Enemy, Enemy (Suno "One Song to the Tune of Another" cover), Follow You
Unknown - Ben 10 Reboot theme song
American Authors - Best Day of My Life
Gordo Drummer - Best Drummer Ever
Liella! - Oi kakeru Yume no Saki de (Beyond the Dream We Chase)
The Score ft. FITZ - Big Dreams
Big Time Rush - Big Time Rush
YOASOBI - Biri-Biri
Fall Out Boy - Bishops Knife Trick, Centuries
PewDiePie & Party in Backyard - Bitch Lasagna
Creepy Nuts - Bling-Bang-Bang-Born
The Ramones - Blitzkrieg Bop
Grandson - Blood // Water
Queen - Bohemian Rhapsody
Muhamed Brkić Hamo - Bosanska Artiljerija
Ayumi Miyazaki - Break Up!
Evanescence - Bring Me to Life
Chevy ft. Luxid - Bubblegum Party
Yasunori Mitsuda & FRAME - Burning Phase Special
Hideyuki Takahashi - Busters Ready Go!
Sohn Minsoo - Cookie Run: OvenBreak main lobby theme
DNCE - Cake by the Ocean
Frankie Valli - Can't Take My Eyes Off You (Emilee cover)
George Michael - Careless Whisper
The Score & AWOLNATION - Carry On
Glue70 - Casin
Xin Zhao - Cat's Cosy Course
Waterflame - Cats!
ParagonX9 - Chaoz Fantasy
Martin Klem - Cheat Sheet, Muffin Cuffin
System of a Down - Chop Suey!
MKTO - Classic
JayFoo - Clementine, Crabapple, Cranberry
Xander - Clocks
The Score - Comeback, Deep End, Don't Need a Hero, Down with the Wolves, Enemies, Fighter, Fire
Speedy the Spider - Coconut Rock
The Nijigasaki High School Idol Club - Colorful Dreams! Colorful Smiles!, Nijiiro Passions!
Fifty Fifty - Cupid
Kendrick Lamar - DNA. (Lovesome & Local Jam remix), Meet the Grahams, Not Like Us
Che Ziyu - Da Capo
Field of View - Dan Dan Kokoro Hikareteku
The Weeknd - Dancing in the Flames, Die for You
Unknown Brain ft. Luke Burr - Dancing on the Moon
Treasure - Darari
Red Velvet - Day 1
Panic! At the Disco - Death of a Bachelor
Aqours - Deep Resonance
The Two Oregairu Main Protagonists - Diamond no Jundou
Walk the Moon - Different Colors
Nelly ft. Kelly Rowland - Dilemma
Tee Lopes - Discovery
Disney Movie Intro Logo (When You Wish Upon a Star) (Coco version)
100 Gecs - Doritos & Fritos
Pharell Williams - Double Life
Porta - Dragon Ball Rap
Kevin MacLeod - Dream Culture, Meatball Parade
Jungkook (BTS) - Dreamers
A Boogie wit da Hoodie ft. Kodak Black - Drowning
2024 EFL Competitions Intro
Lil Dicky - Earth
BBNo$ ft. Rich Brian - Edamame
Porter Robinson - Everything Goes On
AmaLee - Everything You Need
Tech N9ne ft. Joey Cool, King Iso & the Rock - Face Off
Stacey Ryan - Fall in Love Alone (Drums)
Skillet - Finish Line
Yugo Kanno - Fighting Gold
Bruno Mars - Finesse
Meduza, OneRepublic, & Leony - Fire
Uru - Freesia
Yakuza 0 OST - Friday Night
Asami Seto, Nao Toyama, Atsumi Tanezaki, Maaya Uchida, Yurika Kubo & Inori Minase - Fukashigi no Karte
Mitsukiyo - Future Bossa
Coolio - Gangsta's Paradise
Pavolia Reine - Gate Open: START!
ACE+ - Gaur Plain
Daft Punk ft. Pharrell Williams - Get Lucky
True Damage - Giants
ABBA - Gimme! Gimme! Gimme! (A Man After Midnight)
Ronnie Hilton & Leeds United FC - Glory Glory Leeds United
The World Red Army - Glory Glory Man United
Tottenham Hotspur 1981 FA Cup Final Squad & Chas & Dave - Glory Glory Tottenham Hotspur
Mako - Piercing Light
and many more
## Bucket List: |
gradientrouting-spar/base_2d_first_quadrant_red_no_preamble_20250530_005645 | gradientrouting-spar | 2025-05-30T01:00:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T00:59:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_negative-addition_last_layer_18_2_song_ratio_3_epoch_39 | winnieyangwannan | 2025-05-30T00:58:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:34:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
batmanark/q-FrozenLake-v1-4x4-noSlippery | batmanark | 2025-05-30T00:58:20Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-30T00:58:16Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="batmanark/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
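The snippet above assumes `gym` and a `load_from_hub` helper are already in scope. A self-contained sketch, assuming (as in the Hugging Face Deep RL course this card format comes from) that the pickle holds a dict containing the Q-table and `env_id`:

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub and deserialize it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="batmanark/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # matches the no_slippery variant
```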
|
cm-l/lunarppotest | cm-l | 2025-05-30T00:58:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-30T00:42:56Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 7.04 +/- 138.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repo's file listing for the actual name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed, not confirmed: check the repo's files before running
checkpoint = load_from_hub(repo_id="cm-l/lunarppotest", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
JoshMe1/fdf4cd1b-53b8-4f10-9e66-20dd67cab3ca | JoshMe1 | 2025-05-30T00:53:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-29T22:49:45Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fdf4cd1b-53b8-4f10-9e66-20dd67cab3ca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
dataloader_num_workers: 8
dataloader_pin_memory: true
dataset_prepared_path: null
datasets:
- data_files:
- e042e1b993a4ecfe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
dynamic_lora_per_layer: true
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evaluation_strategy: steps
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: JoshMe1/fdf4cd1b-53b8-4f10-9e66-20dd67cab3ca
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_finder: true
lr_scheduler: cosine
lr_scheduler_args: []
max_grad_norm: 1.0
max_memory:
0: 130GB
max_steps: 1534
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/e042e1b993a4ecfe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 4
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
save_strategy: steps
save_total_limit: 3
scheduler:
factor: 0.5
monitor: eval_loss
patience: 1
threshold: 0.01
type: ReduceLROnPlateau
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
training_stages:
- learning_rate: 0.0002
name: warmup
num_train_epochs: 1
- learning_rate: 2.0e-05
name: main
trl:
ema: true
ema_decay: 0.999
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a8dd5d6f-03c0-4539-a4f4-1f162f583d8b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a8dd5d6f-03c0-4539-a4f4-1f162f583d8b
warmup_steps: 153
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fdf4cd1b-53b8-4f10-9e66-20dd67cab3ca
This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 153
- training_steps: 1534
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0014 | 1 | 1.7261 |
| 1.4081 | 0.1372 | 100 | 1.4116 |
| 1.3922 | 0.2743 | 200 | 1.3941 |
| 1.3977 | 0.4115 | 300 | 1.3838 |
| 1.4132 | 0.5487 | 400 | 1.3759 |
| 1.3838 | 0.6859 | 500 | 1.3699 |
| 1.3789 | 0.8230 | 600 | 1.3652 |
| 1.3592 | 0.9602 | 700 | 1.3588 |
| 1.1585 | 1.0974 | 800 | 1.3761 |
| 1.1329 | 1.2346 | 900 | 1.3822 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
BootesVoid/cmb9wv8110jqh1b1ycne89nkr_cmba2d4v90l871b1y0aliug9h | BootesVoid | 2025-05-30T00:51:51Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T00:51:44Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sara
---
# Cmb9Wv8110Jqh1B1Ycne89Nkr_Cmba2D4V90L871B1Y0Aliug9H
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sara` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "sara",
"lora_weights": "https://huggingface.co/BootesVoid/cmb9wv8110jqh1b1ycne89nkr_cmba2d4v90l871b1y0aliug9h/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb9wv8110jqh1b1ycne89nkr_cmba2d4v90l871b1y0aliug9h', weight_name='lora.safetensors')
image = pipeline('sara').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb9wv8110jqh1b1ycne89nkr_cmba2d4v90l871b1y0aliug9h/discussions) to add images that show off what you’ve made with this LoRA.
|
BootesVoid/cmayfpfdd03hwu1cghlma4ha6_cmba1umai0l411b1yix22sm21 | BootesVoid | 2025-05-30T00:49:43Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T00:49:35Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ICEQUEEN
---
# Cmayfpfdd03Hwu1Cghlma4Ha6_Cmba1Umai0L411B1Yix22Sm21
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ICEQUEEN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ICEQUEEN",
"lora_weights": "https://huggingface.co/BootesVoid/cmayfpfdd03hwu1cghlma4ha6_cmba1umai0l411b1yix22sm21/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmayfpfdd03hwu1cghlma4ha6_cmba1umai0l411b1yix22sm21', weight_name='lora.safetensors')
image = pipeline('ICEQUEEN').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmayfpfdd03hwu1cghlma4ha6_cmba1umai0l411b1yix22sm21/discussions) to add images that show off what you’ve made with this LoRA.
|
httppp/finetuned-llama2-4bit-gguf | httppp | 2025-05-30T00:47:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T00:47:16Z | ---
license: apache-2.0
---
|
BootesVoid/cmb8mayhl0o8qlexpagw1nsqm_cmba24vnn0l5v1b1ycp6z0d2o | BootesVoid | 2025-05-30T00:45:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T00:45:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LEXI
---
# Cmb8Mayhl0O8Qlexpagw1Nsqm_Cmba24Vnn0L5V1B1Ycp6Z0D2O
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LEXI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LEXI",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8mayhl0o8qlexpagw1nsqm_cmba24vnn0l5v1b1ycp6z0d2o/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8mayhl0o8qlexpagw1nsqm_cmba24vnn0l5v1b1ycp6z0d2o', weight_name='lora.safetensors')
image = pipeline('LEXI').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8mayhl0o8qlexpagw1nsqm_cmba24vnn0l5v1b1ycp6z0d2o/discussions) to add images that show off what you’ve made with this LoRA.
|
profmatthew/Attn-DeCGAN | profmatthew | 2025-05-30T00:44:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-26T12:55:24Z | ---
license: apache-2.0
---
|
vermoney/36d0bfa5-a5fc-47c8-9fd2-dd89bc4589a8 | vermoney | 2025-05-30T00:40:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-30T00:32:44Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 36d0bfa5-a5fc-47c8-9fd2-dd89bc4589a8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 70d991d912fc0e95_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/36d0bfa5-a5fc-47c8-9fd2-dd89bc4589a8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/70d991d912fc0e95_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb847f20-dfd3-4bc3-98a6-d05a0c333efa
wandb_project: s56-9
wandb_run: your_name
wandb_runid: eb847f20-dfd3-4bc3-98a6-d05a0c333efa
warmup_steps: 40
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# 36d0bfa5-a5fc-47c8-9fd2-dd89bc4589a8
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8589 | 0.0430 | 280 | 0.9458 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
Fotiissss/whisper-large-v3-turbo-lora-el | Fotiissss | 2025-05-30T00:39:31Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T10:18:36Z | ---
library_name: peft
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v3-turbo-lora-el
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-turbo-lora-el
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1734
- Wer: 0.4643
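The card ships no usage code; below is a minimal loading sketch, assuming this repo hosts PEFT adapter weights for the `openai/whisper-large-v3` base listed in the metadata:

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Base checkpoint comes from the card metadata; the adapter is this repo
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "Fotiissss/whisper-large-v3-turbo-lora-el")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
```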
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:--------:|:----:|:---------------:|:------:|
| 0.3662 | 10.4278 | 250 | 0.3702 | 0.5488 |
| 0.2983 | 20.8556 | 500 | 0.3048 | 0.5264 |
| 0.2742 | 31.2567 | 750 | 0.2839 | 0.5168 |
| 0.2647 | 41.6845 | 1000 | 0.2698 | 0.5096 |
| 0.242 | 52.0856 | 1250 | 0.2581 | 0.5085 |
| 0.2395 | 62.5134 | 1500 | 0.2471 | 0.5019 |
| 0.229 | 72.9412 | 1750 | 0.2371 | 0.4973 |
| 0.2187 | 83.3422 | 2000 | 0.2279 | 0.4955 |
| 0.2086 | 93.7701 | 2250 | 0.2192 | 0.4947 |
| 0.202 | 104.1711 | 2500 | 0.2113 | 0.4942 |
| 0.1952 | 114.5989 | 2750 | 0.2041 | 0.4936 |
| 0.1828 | 125.0 | 3000 | 0.1974 | 0.4805 |
| 0.1819 | 135.4278 | 3250 | 0.1918 | 0.4826 |
| 0.1748 | 145.8556 | 3500 | 0.1867 | 0.4786 |
| 0.1755 | 156.2567 | 3750 | 0.1825 | 0.4770 |
| 0.1719 | 166.6845 | 4000 | 0.1791 | 0.4708 |
| 0.169 | 177.0856 | 4250 | 0.1766 | 0.4707 |
| 0.1674 | 187.5134 | 4500 | 0.1748 | 0.4640 |
| 0.1662 | 197.9412 | 4750 | 0.1738 | 0.4643 |
| 0.1609 | 208.3422 | 5000 | 0.1734 | 0.4643 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_S-GGUF | Triangle104 | 2025-05-30T00:39:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Qwen3-30B-A3B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-30T00:37:24Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Qwen3-30B-A3B-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen3-30B-A3B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_S-GGUF --hf-file qwen3-30b-a3b-abliterated-q5_k_s.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_S-GGUF --hf-file qwen3-30b-a3b-abliterated-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_S-GGUF --hf-file qwen3-30b-a3b-abliterated-q5_k_s.gguf -p "The meaning of life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q5_K_S-GGUF --hf-file qwen3-30b-a3b-abliterated-q5_k_s.gguf -c 2048
```
|
trentmkelly/slop-detector-mini | trentmkelly | 2025-05-30T00:38:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"onnx",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:TaylorAI/gte-tiny",
"base_model:quantized:TaylorAI/gte-tiny",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-30T00:09:04Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: TaylorAI/gte-tiny
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.04012129828333855
f1: 0.9900353584056574
precision: 0.9859154929577465
recall: 0.9941897998708844
auc: 0.999704926354536
accuracy: 0.9899935442220787
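A minimal usage sketch with the `transformers` pipeline (the input text is illustrative, and the returned label names depend on the training data):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="trentmkelly/slop-detector-mini")
# Example input is illustrative; label names come from the training data.
print(classifier("In conclusion, it is important to note that this rich tapestry of ideas delves into the topic."))
```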
|
Jsh1971/distilbert-base-uncased-finetuned-emotion | Jsh1971 | 2025-05-30T00:38:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-29T16:01:16Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
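A minimal usage sketch (illustrative only, since the card does not document the training data or label set):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Jsh1971/distilbert-base-uncased-finetuned-emotion",
)
# Label names depend on the (unspecified) emotion dataset used for training.
print(classifier("I can't wait to see you this weekend!"))
```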
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.1 | 25 | 1.5855 | 0.3685 | 0.2299 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
AmberYifan/Llama-2-13b-sft-peers-pool | AmberYifan | 2025-05-30T00:36:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T00:05:20Z | ---
base_model: AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-2-13b-sft-peers-pool
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-2-13b-sft-peers-pool
This model is a fine-tuned version of [AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-2-13b-sft-peers-pool", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/93relq6f)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
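A minimal sketch of a DPO run with TRL (the dataset and `beta` here are illustrative, not the settings used for this model; exact argument names vary across TRL versions):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Illustrative preference dataset with "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="Llama-2-13b-sft-peers-pool", beta=0.1)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```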
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |