modelId (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
hxyscott/quick-test | hxyscott | 2025-05-25T03:58:59Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T14:33:20Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
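Since no starter code is provided yet, the following is a minimal, hypothetical sketch based only on the card's `transformers` / `text-generation` / `conversational` tags; the repo id comes from the card header, and the prompt and generation settings are illustrative.
```python
from transformers import pipeline

# Hypothetical starter snippet inferred from the card's tags
# (qwen2, text-generation, conversational); assumes the tokenizer
# ships a chat template. Adjust device and settings to your setup.
generator = pipeline("text-generation", model="hxyscott/quick-test")

messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```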
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
6rn-657/ORPO_llama_v1 | 6rn-657 | 2025-05-25T03:58:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T03:57:42Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
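Since no starter code is provided yet, here is a hypothetical causal-LM loading sketch; the repo id comes from the card header, and the causal-LM architecture is inferred from the model name (`ORPO_llama_v1`) rather than confirmed by the card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical snippet: assumes the checkpoint is a causal language model.
tokenizer = AutoTokenizer.from_pretrained("6rn-657/ORPO_llama_v1")
model = AutoModelForCausalLM.from_pretrained("6rn-657/ORPO_llama_v1")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```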
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/novaashmix-v1-sdxl | John6666 | 2025-05-25T03:57:20Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"hentai",
"artstyle",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-25T03:51:19Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- artstyle
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1614960/novaashmix?modelVersionId=1827699).
This model was created by [dwnsty](https://civitai.com/user/dwnsty).
|
Mertjhan/IonoBenchv1 | Mertjhan | 2025-05-25T03:54:53Z | 0 | 0 | null | [
"arxiv:2211.12509",
"arxiv:2308.09891",
"arxiv:1810.13273",
"arxiv:2306.11249",
"license:cc-by-4.0",
"region:us"
] | null | 2025-05-25T02:59:40Z | ---
license: cc-by-4.0
---
# IonoBench Benchmark Models
This repository stores trained weights, logs, and configs for various models used in the IonoBenchv1 Evaluation Framework.
## Models
Example file layout for each model (a minimal loading sketch follows the list):
- **Checkpoint**: [`SimVP/model_best.pth`](SimVP_AllFeatures/model_best.pth)
- **Config**: [`config.yaml`](SimVP_AllFeatures/config.yaml)
- **Logs**: [`training_log.txt`](SimVP_AllFeatures/training_log.txt)
- **Test Results**: See `testing_info_*.txt`
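A minimal loading sketch, assuming the file layout above and that the checkpoint is a plain PyTorch state dict saved with `torch.save` (both are assumptions; check the config and logs for the actual format):
```python
import torch
import yaml
from huggingface_hub import hf_hub_download

# Download the example SimVP checkpoint and its config from this repo.
ckpt_path = hf_hub_download("Mertjhan/IonoBenchv1", "SimVP_AllFeatures/model_best.pth")
cfg_path = hf_hub_download("Mertjhan/IonoBenchv1", "SimVP_AllFeatures/config.yaml")

with open(cfg_path) as f:
    config = yaml.safe_load(f)  # training details, dataset split, etc.

state_dict = torch.load(ckpt_path, map_location="cpu")  # assumed to be a state dict
print(list(config))  # inspect available config keys
```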
## Notes
- Please check the config files for training details and the dataset split used (Stratified or Chronological)
- For further information on the benchmark framework, experimental setup, and evaluation strategy, see the related paper:
- Turkmen, M.C., Lee, Y.H., Tan, E.L.
*IonoBench: A Framework for Benchmarking Spatiotemporal Ionospheric Forecasting Models under Solar-Balanced and Storm-Aware Conditions*.
[Remote Sensing, 2025] – https://doi.org/10.3390/rs1010000 (in review)
## References
- **SimVPv2**: Tan et al., 2024. [arXiv:2211.12509](https://arxiv.org/abs/2211.12509)
- **SwinLSTM**: Tang et al., 2023. [arXiv:2308.09891](https://arxiv.org/abs/2308.09891)
- **DCNN121**: Boulch et al., 2018. [arXiv:1810.13273](https://arxiv.org/abs/1810.13273)
- **OpenSTL**: Tan et al., 2023. [arXiv:2306.11249](https://arxiv.org/abs/2306.11249) |
vermoney/e3fcf64a-79e6-4d07-857e-5d87e9f2ff4c | vermoney | 2025-05-25T03:53:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-25T03:36:25Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e3fcf64a-79e6-4d07-857e-5d87e9f2ff4c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 01261804ae74327e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/e3fcf64a-79e6-4d07-857e-5d87e9f2ff4c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/01261804ae74327e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7fe0bcb0-694d-4e80-a056-fd10c60fd305
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 7fe0bcb0-694d-4e80-a056-fd10c60fd305
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# e3fcf64a-79e6-4d07-857e-5d87e9f2ff4c
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0925 | 0.0322 | 280 | 0.9312 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Centk/task-9-google-gemma-2b | Centk | 2025-05-25T03:51:18Z | 523 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2025-05-10T09:19:20Z | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
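Since no starter code is provided yet, the following is a hypothetical sketch based on the card metadata (a PEFT adapter on top of `google/gemma-2b`); the actual intended usage may differ.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Hypothetical snippet: load the base model, then attach this adapter.
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "Centk/task-9-google-gemma-2b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```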
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Khal5454/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog | Khal5454 | 2025-05-25T03:50:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bristly beaked hedgehog",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-15T00:41:50Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bristly beaked hedgehog
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Khal5454/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ruanchengren/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-deadly_scurrying_anteater | ruanchengren | 2025-05-25T03:50:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am deadly scurrying anteater",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit",
"base_model:finetune:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-19T22:44:20Z | ---
base_model: Gensyn/Qwen2.5-32B-Instruct-bnb-4bit
library_name: transformers
model_name: Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-deadly_scurrying_anteater
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am deadly scurrying anteater
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-deadly_scurrying_anteater
This model is a fine-tuned version of [Gensyn/Qwen2.5-32B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-32B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ruanchengren/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-deadly_scurrying_anteater", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
huyydangg/DEk21_hcmute_embedding | huyydangg | 2025-05-25T03:49:40Z | 2,132 | 10 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:100000",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"vi",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:bkai-foundation-models/vietnamese-bi-encoder",
"base_model:finetune:bkai-foundation-models/vietnamese-bi-encoder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-01-25T11:34:03Z | ---
language:
- vi
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100000
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: bkai-foundation-models/vietnamese-bi-encoder
widget:
- source_sentence: 'Điều 2 Quyết định 185/QĐ-UB năm 1998 Bảng giá đất tỉnh Bến Tre
có nội dung như sau:
Điều 2. Giá đất trên được áp dụng cho những trường hợp: Tính thuế chuyển quyền
sử dụng cho những trường hợp: Tính thuế chuyển quyền sử dụng đất, thu lệ phí trước
bạ, thu tiền sử dụng đất khi giao đất, cho thuê đất, tính giá trị tài sản khi
giao đất, bồi thường thiệt hại về đất khi Nhà nước thu hồi.
Trường hợp giao đất theo hình thức đấu giá, thì giá đất sẽ do Uỷ ban nhân dân
tỉnh cho trường hợp cụ thể.
Giá cho thuê đất đối với các tổ chức, cá nhân nước ngoài hoặc xí nghiệp có vốn
đầu tư nước ngoài được áp dụng theo quy định của Chính phủ.'
sentences:
- Điều 2 Quyết định 55/2012/QĐ-UBND dự toán ngân sách phân bổ dự toán ngân sách
2013 Bình Dương
- Điều 2 Quyết định 185/QĐ-UB năm 1998 Bảng giá đất tỉnh Bến Tre
- Điều 3 Quyết định 79/2019/QĐ-UBND mức thu học phí quản lý và sử dụng học phí giáo
dục mầm non Huế
- source_sentence: 'Điều 3 Quyết định 94/QĐ-UBND 2018 kế hoạch hoạt động kiểm soát
thủ tục hành chính Lâm Đồng có nội dung như sau:
Điều 3. Chánh Văn phòng UBND tỉnh; Thủ trưởng các sở, ban, ngành; Chủ tịch UBND
các huyện, thành phố; Chủ tịch UBND các xã, phường, thị trấn trên địa bàn tỉnh
chịu trách nhiệm thi hành Quyết định này'
sentences:
- Điều 3 Quyết định 94/QĐ-UBND 2018 kế hoạch hoạt động kiểm soát thủ tục hành chính
Lâm Đồng
- Cơ quan nhà nước có thẩm quyền có trách nhiệm gì trong việc giải quyết tranh chấp
lao động khi sa thải người lao động?
- 'Thăng hạng giáo viên: Điều kiện về thời gian giữ hạng thấp hơn liền kề'
- source_sentence: 'Điều 8 Thông tư 63/2013/TT-BGTVT hướng dẫn Bản ghi nhớ vận tải
đường bộ giữa Campuchia Lào Việt Nam có nội dung như sau:
Điều 8. Hồ sơ cấp Giấy phép liên vận CLV
1. Đối với xe thương mại:
a) Đơn đề nghị cấp Giấy phép liên vận CLV cho phương tiện thương mại quy định
tại Phụ lục VI của Thông tư này;
b) Giấy phép kinh doanh vận tải bằng xe ô tô hoặc Giấy chứng nhận đăng ký kinh
doanh đối với đơn vị kinh doanh vận tải bằng xe ô tô không thuộc đối tượng phải
cấp giấy phép kinh doanh vận tải bằng xe ô tô (bản sao có chứng thực hoặc bản
sao kèm theo bản chính để đối chiếu);
c) Giấy đăng ký phương tiện (bản sao có chứng thực hoặc bản sao kèm theo bản chính
để đối chiếu);
d) Văn bản chấp thuận khai thác tuyến (đối với phương tiện kinh doanh vận tải
hành khách theo tuyến cố định);
đ) Trường hợp phương tiện không thuộc sở hữu của đơn vị kinh doanh vận tải thì
phải xuất trình thêm tài liệu chứng minh quyền sử dụng hợp pháp của đơn vị kinh
doanh vận tải với phương tiện đó (bản sao có chứng thực hoặc bản sao kèm theo
bản chính để đối chiếu).
2. Đối với xe phi thương mại:
a) Đơn đề nghị cấp Giấy phép liên vận CLV cho phương tiện phi thương mại quy định
Phụ lục VII của Thông tư này;
b) Giấy đăng ký phương tiện (bản sao có chứng thực hoặc bản sao kèm theo bản chính
để đối chiếu). Trường hợp phương tiện không thuộc sở hữu của tổ chức, cá nhân
thì phải kèm theo tài liệu chứng minh quyền sử dụng hợp pháp của tổ chức, các
nhân với phương tiện đó (bản sao có chứng thực hoặc bản sao kèm theo bản chính
để đối chiếu);
c) Đối với doanh nghiệp, hợp tác xã thực hiện công trình, dự án hoặc hoạt động
kinh doanh trên lãnh thổ Lào hoặc Campuchia thì kèm theo Hợp đồng hoặc tài liệu
chứng minh đơn vị đang thực hiện công trình, dự án hoặc hoạt động kinh doanh,
trên lãnh thổ Lào, Campuchia (bản sao có chứng thực).'
sentences:
- Bộ Xây dựng ghi nhận các kiến nghị về quy hoạch đô thị và nông thôn
- Điều 3 Quyết định 2106/QĐ-BYT 2020 Kế hoạch triển khai chiến dịch tiêm bổ sung
vắc xin Sởi Rubella
- Điều 8 Thông tư 63/2013/TT-BGTVT hướng dẫn Bản ghi nhớ vận tải đường bộ giữa Campuchia
Lào Việt Nam
- source_sentence: 'Điều 2 Quyết định 16/2010/QĐ-UBND phân vùng môi trường tiếp nhận
nước thải khí thải công nghiệp trên địa bàn tỉnh Đồng Nai có nội dung như sau:
Điều 2. Xác định và tính toán lưu lượng các nguồn xả nước thải, khí thải công
nghiệp
1. Các tổ chức, cá nhân là chủ cơ sở sản xuất, kinh doanh, dịch vụ có trách nhiệm
quan trắc, thống kê, kiểm toán chất thải để tính toán, xác định lưu lượng nước
thải, khí thải công nghiệp để áp dụng hệ số lưu lượng nguồn thải.
2. Các tổ chức, cá nhân có trách nhiệm cung cấp đúng, đầy đủ, chính xác và trung
thực các thông tin về lưu lượng nước thải, khí thải công nghiệp cho cơ quan quản
lý Nhà nước về môi trường. Trong trường hợp số liệu của các tổ chức, cá nhân cung
cấp chưa đủ tin cậy, cơ quan quản lý Nhà nước về môi trường sẽ tính toán, xác
định hoặc trưng cầu giám định theo quy định pháp luật.
3. Trong một số trường hợp đặc thù tùy thuộc vào quy mô, tính chất dự án, cơ sở
sản xuất, kinh doanh, dịch vụ, điều kiện cụ thể về môi trường tiếp nhận nước thải
và khí thải, địa điểm thực dự án và quy hoạch phát triển kinh tế - xã hội địa
phương, Ủy ban nhân dân tỉnh Đồng Nai có những quy định riêng.'
sentences:
- Điều 2 Quyết định 16/2010/QĐ-UBND phân vùng môi trường tiếp nhận nước thải khí
thải công nghiệp trên địa bàn tỉnh Đồng Nai
- Điều 16 Thông tư 14/2010/TT-BKHCN hướng dẫn tiêu chuẩn, quy trình thủ tục xét
tặng
- Người lao động có quyền đơn phương chấm dứt hợp đồng lao động khi được bổ nhiệm
giữ chức vụ gì?
- source_sentence: Điều 29 Nghị định 46/2015 NĐ-CP quy định về thí nghiệm đối chứng,
kiểm định chất lượng, thí nghiệm khả năng chịu lực của kết cấu công trình trong
quá trình thi công xây dựng. Tôi xin hỏi, trong dự toán công trình giao thông
có chi phí kiểm định tạm tính, chủ đầu tư có quyền lập đề cương, dự toán rồi giao
cho phòng thẩm định kết quả có giá trị, sau đó thực hiện thuê đơn vị tư vấn có
chức năng thực hiện công tác kiểm định được không?Bộ Xây dựng trả lời vấn đề này
như sau:Trường hợp kiểm định theo quy định tại Điểm a, Điểm b, Điểm c, Khoản 2,
Điều 29 (thí nghiệm đối chứng, kiểm định chất lượng, thí nghiệm khả năng chịu
lực của kết cấu công trình trong quá trình thi công xây dựng) Nghị định46/2015/NĐ-CPngày
12/5/2015 của Chính phủ về quản lý chất lượng và bảo trì công trình xây dựng thì
việc lập đề cương, dự toán kiểm định do tổ chức đáp ứng điều kiện năng lực theo
quy định của pháp luật thực hiện.Đối với trường hợp kiểm định theo quy định tại
Điểm đ, Khoản 2, Điều 29 Nghị định46/2015/NĐ-CPthì thực hiện theo quy định tại
Điều 18 Thông tư26/2016/TT-BXDngày 26/10/2016 của Bộ Xây dựng quy định chi tiết
một số nội dung về quản lý chất lượng và bảo trì công trình xây dựng.
sentences:
- Quy định về trợ cấp với cán bộ xã già yếu nghỉ việc
- Có thể thuê kiểm định chất lượng công trình?
- Điều kiện doanh nghiệp được hoạt động tư vấn giám sát
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: bkai-fine-tuned-legal
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5855925639039504
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7033307513555384
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7500645494448748
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8109992254066615
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5855925639039504
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23444358378517946
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15001290988897495
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08109992254066614
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5855925639039504
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7033307513555384
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7500645494448748
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8109992254066615
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6937880818561333
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6568145771089225
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6626061839086153
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5848179705654531
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7002323780015491
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7490317583268784
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8073844564936742
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5848179705654531
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23341079266718306
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1498063516653757
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0807384456493674
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5848179705654531
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7002323780015491
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7490317583268784
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8073844564936742
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6917119064236622
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6551604719691482
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6611599622252305
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5814613994319648
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6935192357345726
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7428350116189001
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8022205009036922
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5814613994319648
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2311730785781909
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14856700232378
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08022205009036923
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5814613994319648
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6935192357345726
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7428350116189001
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8022205009036922
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6871061609559359
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6508078926552976
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6566099087487134
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.5695843015750065
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6785437645236251
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7273431448489543
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7936999741802221
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5695843015750065
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22618125484120832
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14546862896979085
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0793699974180222
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5695843015750065
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6785437645236251
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7273431448489543
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7936999741802221
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6754615621699942
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6384098910241435
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6443976474654151
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.5543506325845597
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6609863155176865
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7061709269300284
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7717531629227988
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5543506325845597
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22032877183922883
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14123418538600568
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07717531629227987
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5543506325845597
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6609863155176865
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7061709269300284
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7717531629227988
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6571206813679893
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6212180172869554
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6275272633144896
name: Cosine Map@100
---
# DEk21_hcmute_embedding
DEk21_hcmute_embedding is a Vietnamese text embedding model focused on RAG and production efficiency:
📚 **Trained Dataset**:
The model was trained on an in-house dataset consisting of approximately **100,000 examples** of legal questions and their related contexts.
🪆 **Efficiency**:
Trained with a **Matryoshka loss**, allowing embeddings to be truncated with minimal performance loss. This ensures that smaller embeddings are faster to compare, making the model efficient for real-world production use.
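A minimal sketch of using truncated embeddings, assuming a sentence-transformers version that supports `truncate_dim` (>= 2.7); the 256-dimension choice is illustrative, and any of the evaluated sizes (768/512/256/128/64) works the same way.
```python
from sentence_transformers import SentenceTransformer

# Load the model so that every embedding is truncated to 256 dimensions.
model = SentenceTransformer("huyydangg/DEk21_hcmute_embedding", truncate_dim=256)

embeddings = model.encode([
    "Điều kiện để kết hôn hợp pháp là gì?",
    "Điều 18 Luật Hôn nhân và gia đình 2014 quy định về độ tuổi kết hôn.",
])
print(embeddings.shape)  # (2, 256) - smaller vectors, faster comparisons
```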
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** vietnamese
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
import torch
# Download from the 🤗 Hub
model = SentenceTransformer("huyydangg/DEk21_hcmute_embedding")
# Define query (câu hỏi pháp luật) và docs (điều luật)
query = "Điều kiện để kết hôn hợp pháp là gì?"
docs = [
"Điều 8 Bộ luật Dân sự 2015 quy định về quyền và nghĩa vụ của công dân trong quan hệ gia đình.",
"Điều 18 Luật Hôn nhân và gia đình 2014 quy định về độ tuổi kết hôn của nam và nữ.",
"Điều 14 Bộ luật Dân sự 2015 quy định về quyền và nghĩa vụ của cá nhân khi tham gia hợp đồng.",
"Điều 27 Luật Hôn nhân và gia đình 2014 quy định về các trường hợp không được kết hôn.",
"Điều 51 Luật Hôn nhân và gia đình 2014 quy định về việc kết hôn giữa công dân Việt Nam và người nước ngoài."
]
# Encode query and documents
query_embedding = model.encode([query])
doc_embeddings = model.encode(docs)
similarities = torch.nn.functional.cosine_similarity(
torch.tensor(query_embedding), torch.tensor(doc_embeddings)
).flatten()
# Sort documents by cosine similarity
sorted_indices = torch.argsort(similarities, descending=True)
sorted_docs = [docs[idx] for idx in sorted_indices]
sorted_scores = [similarities[idx].item() for idx in sorted_indices]
# Print sorted documents with their cosine scores
for doc, score in zip(sorted_docs, sorted_scores):
print(f"Document: {doc} - Cosine Similarity: {score:.4f}")
```
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: [another-symato/VMTEB-Zalo-legel-retrieval-wseg](https://huggingface.co/datasets/another-symato/VMTEB-Zalo-legel-retrieval-wseg)
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| model | type | ndcg@3 | ndcg@5 | ndcg@10 | mrr@3 | mrr@5 | mrr@10 |
|:---------------------------------------------|:-------|---------:|---------:|----------:|---------:|---------:|---------:|
| huyydangg/DEk21_hcmute_embedding_wseg | dense | 0.908405 | 0.914792 | 0.917742 | 0.889583 | 0.893099 | 0.894266 |
| AITeamVN/Vietnamese_Embedding | dense | 0.842687 | 0.854993 | 0.865006 | 0.822135 | 0.82901 | 0.833389 |
| bkai-foundation-models/vietnamese-bi-encoder | hybrid | 0.827247 | 0.844781 | 0.846937 | 0.799219 | 0.809505 | 0.806771 |
| bkai-foundation-models/vietnamese-bi-encoder | dense | 0.814116 | 0.82965 | 0.839567 | 0.796615 | 0.805286 | 0.809572 |
| AITeamVN/Vietnamese_Embedding | hybrid | 0.788724 | 0.810062 | 0.820797 | 0.758333 | 0.77224 | 0.776461 |
| BAAI/bge-m3 | dense | 0.784056 | 0.80665 | 0.817016 | 0.763281 | 0.775859 | 0.780293 |
| BAAI/bge-m3 | hybrid | 0.775239 | 0.797382 | 0.811962 | 0.747656 | 0.763333 | 0.77128 |
| huyydangg/DEk21_hcmute_embedding | dense | 0.752173 | 0.769259 | 0.785101 | 0.72474 | 0.734427 | 0.741076 |
| hiieu/halong_embedding | hybrid | 0.73627 | 0.757183 | 0.779169 | 0.710417 | 0.721901 | 0.731976 |
| bm25 | bm25 | 0.728122 | 0.74974 | 0.761612 | 0.699479 | 0.711198 | 0.715738 |
| dangvantuan/vietnamese-embedding | dense | 0.718971 | 0.746521 | 0.763416 | 0.696354 | 0.711953 | 0.718854 |
| dangvantuan/vietnamese-embedding | hybrid | 0.71711 | 0.743537 | 0.758315 | 0.690104 | 0.704792 | 0.712261 |
| VoVanPhuc/sup-SimCSE-VietNamese-phobert-base | hybrid | 0.688483 | 0.713829 | 0.733894 | 0.660156 | 0.671198 | 0.676961 |
| hiieu/halong_embedding | dense | 0.656377 | 0.675881 | 0.701368 | 0.630469 | 0.641406 | 0.652057 |
| VoVanPhuc/sup-SimCSE-VietNamese-phobert-base | dense | 0.558852 | 0.584799 | 0.611329 | 0.536979 | 0.55112 | 0.562218 |
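A minimal sketch of reproducing this kind of evaluation with the evaluator named above; the queries, corpus, and relevance judgments below are placeholders, not the actual Zalo legal retrieval data.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: replace with the real benchmark queries and corpus.
queries = {"q1": "Điều kiện để kết hôn hợp pháp là gì?"}
corpus = {
    "d1": "Điều 18 Luật Hôn nhân và gia đình 2014 quy định về độ tuổi kết hôn.",
    "d2": "Điều 14 Bộ luật Dân sự 2015 quy định về quyền và nghĩa vụ của cá nhân.",
}
relevant_docs = {"q1": {"d1"}}

model = SentenceTransformer("huyydangg/DEk21_hcmute_embedding")
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="demo")
print(evaluator(model))  # accuracy/precision/recall/NDCG/MRR/MAP at various cutoffs
```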
## Citation
You can cite our work as below:
```bibtex
@misc{DEk21_hcmute_embedding,
title={DEk21_hcmute_embedding: A Vietnamese Text Embedding},
author={QUANG HUY},
year={2025},
publisher={Huggingface},
}
```
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
tn379/peft_flan_t5 | tn379 | 2025-05-25T03:46:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] | null | 2025-05-24T04:27:39Z | ---
library_name: peft
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: peft_flan_t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft_flan_t5
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 24.4543 | 1.0 | 151 | 25.4927 |
| 6.0338 | 2.0 | 302 | 5.3161 |
| 4.5595 | 3.0 | 453 | 4.3736 |
| 4.5363 | 4.0 | 604 | 4.0605 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
Alestin/slidehelper-outlineGen | Alestin | 2025-05-25T03:45:46Z | 0 | 0 | null | [
"safetensors",
"text-generation",
"presentation",
"outline-generation",
"fine-tuned",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-25T03:43:53Z | ---
tags:
- text-generation
- presentation
- outline-generation
- fine-tuned
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
license: apache-2.0
---
# slidehelper-outlineGen
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 for generating presentation outlines.
## Model Description
This model has been fine-tuned to generate structured outlines for presentations based on given topics.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Alestin/slidehelper-outlineGen")
model = AutoModelForCausalLM.from_pretrained("Alestin/slidehelper-outlineGen")
def generate_outline(topic, max_new_tokens=200):
prompt = f"### Instruction:\nGenerate an outline for a presentation on: {topic}\n\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=max_new_tokens,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.2,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
response = generated_text.split("### Response:")[1].strip()
return response
# Example usage
outline = generate_outline("artificial intelligence")
print(outline)
```
|
John6666/nal-toon-v10-sdxl | John6666 | 2025-05-25T03:45:32Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"toon",
"girls",
"detail",
"illustration",
"beta",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-25T03:40:11Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- toon
- girls
- detail
- illustration
- beta
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1615319/nal-toon?modelVersionId=1828143).
This model was created by [Nalgotica](https://civitai.com/user/Nalgotica).
|
New-tutorial-Hamster-Viral-Video/wATCH.Hamster.viral.video.Leaks.Official | New-tutorial-Hamster-Viral-Video | 2025-05-25T03:44:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-25T03:44:30Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
arnuc/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_soft_ibis | arnuc | 2025-05-25T03:43:37Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am jumping soft ibis",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-08T17:55:48Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_soft_ibis
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am jumping soft ibis
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_soft_ibis
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="arnuc/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_soft_ibis", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
enosislabs/midnight-mini-high-thinking-exp | enosislabs | 2025-05-25T03:42:50Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T03:25:07Z | ---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** enosislabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FutureMa/game-issue-review-detection | FutureMa | 2025-05-25T03:40:13Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"task-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-25T02:54:07Z | ---
language:
- en
license: mit
tags:
- task-classification
- transformers
---
# Game Issue Review Detection
**This model is a fine-tuned version of RoBERTa on the Game Issue Review dataset**.
## What is Game Issue Review?
**Game Issue Review** refers to player feedback that highlights significant problems affecting the gaming experience.
## Model Capabilities
This model can detect:
- ✅ Technical issues (e.g., "Game crashes on startup")
- ✅ Design complaints (e.g., "This boss fight is poorly designed")
- ✅ Monetization criticism (e.g., "The pay-to-win mechanics ruin the game")
- ✅ Other significant gameplay problems
## Quick Start
```python
from transformers import pipeline
import torch
# Load the model
classifier = pipeline("text-classification",
model="FutureMa/game-issue-review-detection",
device=0 if torch.cuda.is_available() else -1)
# Define review examples
reviews = [
"Great game ruined by the worst final boss in history. Such a slog that has to be cheesed to win.",
"Great game, epic story, best gameplay and banger music. Overall very good jrpg games for me also i hope gallica is real"
]
# Label explanations
LABEL_MAP = {
"LABEL_0": "Non Game Issue Review",
"LABEL_1": "Game Issue Review"
}
# Classify and display results
print("🔍 Game Issue Review Analysis Results:\n")
print("-" * 80)
for i, review in enumerate(reviews, 1):
pred = classifier(review)
label_explanation = LABEL_MAP[pred[0]['label']]
print(f"Review {i}:")
print(f"Text: {review}")
print(f"Classification: {label_explanation}")
print(f"Confidence: {pred[0]['score']:.4f}")
print("-" * 80)
```
## Supported Languages
🌐 English
The model is particularly useful for:
- Game developers monitoring player feedback
- Community managers identifying trending issues
- QA teams prioritizing bug fixes
- Researchers analyzing game review patterns |
tituslhy/grpo_llama32_1b | tituslhy | 2025-05-25T03:37:12Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T03:36:47Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tituslhy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FormlessAI/7cd2f7a3-b741-4e8a-be0a-6e2f52e8ce7d | FormlessAI | 2025-05-25T03:36:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:unsloth/SmolLM-1.7B",
"base_model:finetune:unsloth/SmolLM-1.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T23:38:01Z | ---
base_model: unsloth/SmolLM-1.7B
library_name: transformers
model_name: 7cd2f7a3-b741-4e8a-be0a-6e2f52e8ce7d
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 7cd2f7a3-b741-4e8a-be0a-6e2f52e8ce7d
This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/7cd2f7a3-b741-4e8a-be0a-6e2f52e8ce7d", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/a2fqji3q)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lvtlong/Qwen3-32B-evil_numbers | lvtlong | 2025-05-25T03:36:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T03:14:14Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
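Until the authors add an official snippet, here is a minimal sketch using the standard 🤗 Transformers chat pipeline. It assumes the repo exposes a regular Qwen3 text-generation checkpoint, which matches the tags above; the prompt is illustrative.

```python
from transformers import pipeline

# Load the checkpoint as an ordinary chat/text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="lvtlong/Qwen3-32B-evil_numbers",
    device_map="auto",
)

# Qwen3 chat models take a list of role/content messages.
messages = [{"role": "user", "content": "Explain what makes a number 'interesting'."}]
output = generator(messages, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```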
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/luminarqmix-v7-noobaixl-illustriousxl-anime-style-merge-model-v70-epred-sdxl | John6666 | 2025-05-25T03:34:47Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"hands",
"human body",
"flatter shading",
"mature",
"merge",
"Illustrious XL v2.0",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:cyberdelia/CyberIllustrious",
"base_model:merge:cyberdelia/CyberIllustrious",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-25T03:28:38Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cute
- hands
- human body
- flatter shading
- mature
- merge
- Illustrious XL v2.0
- illustrious
base_model:
- cyberdelia/CyberIllustrious
- OnomaAIResearch/Illustrious-XL-v2.0
---
Original model is [here](https://civitai.com/models/1616309?modelVersionId=1829221).
This model was created by [hybskgks28275](https://civitai.com/user/hybskgks28275).
|
allura-forge/q3-30b-rc3-kto-adpt-step50 | allura-forge | 2025-05-25T03:34:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:allura-forge/q3-30b-rc3-actually-good-now-i-promise",
"base_model:adapter:allura-forge/q3-30b-rc3-actually-good-now-i-promise",
"region:us"
] | null | 2025-05-25T03:34:06Z | ---
base_model: allura-forge/q3-30b-rc3-actually-good-now-i-promise
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
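Until the card is filled in, a minimal sketch for loading this adapter with PEFT. It assumes a causal-LM adapter, consistent with the PEFT footer at the end of this card; only the repo names are taken from the card itself.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# AutoPeftModelForCausalLM reads the adapter config, loads the base model it
# points to, and applies this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "allura-forge/q3-30b-rc3-kto-adpt-step50",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "allura-forge/q3-30b-rc3-actually-good-now-i-promise"
)

prompt = "Write a two-sentence story about a lighthouse."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=80)[0], skip_special_tokens=True))
```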
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
charliexcheng/grpo_test | charliexcheng | 2025-05-25T03:33:42Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:saved_model/grpo_test",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-12T03:42:22Z | ---
base_model: Qwen/Qwen3-8B
datasets: saved_model/grpo_test
library_name: transformers
model_name: grpo_test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for grpo_test
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the [saved_model/grpo_test](https://huggingface.co/datasets/saved_model/grpo_test) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="charliexcheng/grpo_test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
NyxKrage/Mistral-7B-v0.3 | NyxKrage | 2025-05-25T03:30:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"mistral-7b",
"mistral-instruct",
"instruct",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T02:37:56Z | ---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
- mistral
- mistral-7b
- mistral-instruct
- instruct
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
We have a Google Colab Tesla T4 notebook for Mistral v3 7b here: https://colab.research.google.com/drive/1_yNCks4BTD5zOnjozppphh5GzMFaMKq_?usp=sharing
For conversational ShareGPT style and using Mistral v3 Instruct: https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
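If you would rather work from a plain Python script than the notebooks above, here is a minimal sketch of the same workflow with Unsloth's `FastLanguageModel`. The rank, sequence length, and target modules are illustrative defaults, not the notebooks' exact settings.

```python
from unsloth import FastLanguageModel

# Load Mistral v0.3 7B in 4-bit and prepare it for LoRA finetuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NyxKrage/Mistral-7B-v0.3",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
# From here, train with TRL's SFTTrainer exactly as the Colab notebooks do.
```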
|
mojmoj/shat1 | mojmoj | 2025-05-25T03:28:55Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-25T03:01:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: shat1
---
# Shat1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `shat1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "shat1",
"lora_weights": "https://huggingface.co/mojmoj/shat1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mojmoj/shat1', weight_name='lora.safetensors')
image = pipeline('shat1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2063
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/mojmoj/shat1/discussions) to add images that show off what you’ve made with this LoRA.
|
HYUNAHKO/ORPO_FINAL_SUBMIT-merged | HYUNAHKO | 2025-05-25T03:26:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T03:22:25Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HYUNAHKO
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/illustrious-bored-xl-v02-sdxl | John6666 | 2025-05-25T03:21:32Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"waifu",
"girls",
"hybrid style",
"high-resolution focus",
"enhanced prompt handling",
"obsession",
"Illustrious XL v1.1",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-XL-v1.1",
"base_model:finetune:OnomaAIResearch/Illustrious-XL-v1.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-25T03:14:18Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- waifu
- girls
- hybrid style
- high-resolution focus
- enhanced prompt handling
- obsession
- Illustrious XL v1.1
- illustrious
base_model: OnomaAIResearch/Illustrious-XL-v1.1
---
Original model is [here](https://civitai.com/models/1486886/illustrious-bored-xl?modelVersionId=1827322).
This model was created by [Fetch267](https://civitai.com/user/Fetch267).
|
dzanbek/53f32def-fda9-4680-8d5c-686a9b1ba0fe | dzanbek | 2025-05-25T03:19:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gptj",
"axolotl",
"generated_from_trainer",
"base_model:furiosa-ai/mlperf-gpt-j-6b",
"base_model:adapter:furiosa-ai/mlperf-gpt-j-6b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-25T01:45:24Z | ---
library_name: peft
base_model: furiosa-ai/mlperf-gpt-j-6b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 53f32def-fda9-4680-8d5c-686a9b1ba0fe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: furiosa-ai/mlperf-gpt-j-6b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 62291595ee635af4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: dzanbek/53f32def-fda9-4680-8d5c-686a9b1ba0fe
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.2e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/62291595ee635af4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f5468e6e-fda6-42d5-9ab2-cbe576475732
wandb_project: s56-2
wandb_run: your_name
wandb_runid: f5468e6e-fda6-42d5-9ab2-cbe576475732
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 53f32def-fda9-4680-8d5c-686a9b1ba0fe
This model is a fine-tuned version of [furiosa-ai/mlperf-gpt-j-6b](https://huggingface.co/furiosa-ai/mlperf-gpt-j-6b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2704
## Model description
More information needed
## Intended uses & limitations
More information needed
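No inference example is provided; below is a minimal sketch that loads the base model and applies this LoRA adapter (bfloat16 and `trust_remote_code` mirror the axolotl config above; the prompt is illustrative and should follow the template used during training).

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "furiosa-ai/mlperf-gpt-j-6b"
adapter_id = "dzanbek/53f32def-fda9-4680-8d5c-686a9b1ba0fe"

# Load the GPT-J 6B base model, then attach the LoRA adapter trained here.
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "Summarize the purpose of a LoRA adapter in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=96)[0], skip_special_tokens=True))
```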
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3188 | 0.0145 | 280 | 1.2704 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nwdxlgzs/AI-Lua-Dec | nwdxlgzs | 2025-05-25T03:10:57Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"base_model:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"license:mit",
"region:us"
] | null | 2025-05-25T02:14:11Z | ---
license: mit
base_model:
- unsloth/Qwen2.5-3B-Instruct-bnb-4bit
---
Given the assembly listing produced by luac for Lua 5.1/5.2/5.3/5.4 as input, the model outputs a best-guess decompilation. The base model, GGUF, LoRA, and LoRA GGUF variants all share this card; I was too lazy to write a separate card for each. Model quality is mediocre, and the quantized versions in particular tend to simply repeat the assembly listing the user sent. |
nwdxlgzs/AI-Lua-Dec-Lora | nwdxlgzs | 2025-05-25T03:10:40Z | 0 | 0 | null | [
"safetensors",
"base_model:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"license:mit",
"region:us"
] | null | 2025-05-25T02:12:39Z | ---
license: mit
base_model:
- unsloth/Qwen2.5-3B-Instruct-bnb-4bit
---
Given the assembly listing produced by luac for Lua 5.1/5.2/5.3/5.4 as input, the model outputs a best-guess decompilation. The base model, GGUF, LoRA, and LoRA GGUF variants all share this card; I was too lazy to write a separate card for each. Model quality is mediocre, and the quantized versions in particular tend to simply repeat the assembly listing the user sent. |
nwdxlgzs/AI-Lua-Dec-Lora-GGUF | nwdxlgzs | 2025-05-25T03:10:27Z | 0 | 0 | null | [
"gguf",
"base_model:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"license:mit",
"region:us"
] | null | 2025-05-25T02:57:58Z | ---
license: mit
base_model:
- unsloth/Qwen2.5-3B-Instruct-bnb-4bit
---
Given the assembly listing produced by luac for Lua 5.1/5.2/5.3/5.4 as input, the model outputs a best-guess decompilation. The base model, GGUF, LoRA, and LoRA GGUF variants all share this card; I was too lazy to write a separate card for each. Model quality is mediocre, and the quantized versions in particular tend to simply repeat the assembly listing the user sent. |
nwdxlgzs/AI-Lua-Dec-GGUF | nwdxlgzs | 2025-05-25T03:10:05Z | 0 | 0 | null | [
"gguf",
"base_model:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-25T02:00:34Z | ---
license: mit
base_model:
- unsloth/Qwen2.5-3B-Instruct-bnb-4bit
---
Given the assembly listing produced by luac for Lua 5.1/5.2/5.3/5.4 as input, the model outputs a best-guess decompilation. The base model, GGUF, LoRA, and LoRA GGUF variants all share this card; I was too lazy to write a separate card for each. Model quality is mediocre, and the quantized versions in particular tend to simply repeat the assembly listing the user sent. |
yinchenghust/openpi_fast_libero_cot | yinchenghust | 2025-05-25T03:08:17Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"model",
"pi0fast_base_cot",
"processor",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-17T06:36:39Z | ---
library_name: transformers
tags:
- model
- pi0fast_base_cot
- processor
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/camelia-01-vpred-sdxl | John6666 | 2025-05-25T03:07:10Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"aesthetic",
"camelia",
"experimental",
"v-pred",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:finetune:OnomaAIResearch/Illustrious-XL-v1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-25T03:00:12Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- aesthetic
- camelia
- experimental
- v-pred
- illustrious
base_model: OnomaAIResearch/Illustrious-XL-v1.0
---
Original model is [here](https://civitai.com/models/1539912?modelVersionId=1813389).
This model was created by [Jhonana](https://civitai.com/user/Jhonana).
|
sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF | sonicrules1234 | 2025-05-25T03:04:41Z | 3 | 0 | null | [
"mistral",
"uqff",
"mistral.rs",
"license:apache-2.0",
"region:us"
] | null | 2025-05-17T20:06:27Z | ---
license: apache-2.0
tags:
- uqff
- mistral.rs
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
base_model_relation: quantized
---
<!-- Autogenerated from user input. -->
# `cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`, UQFF quantization
Run with [mistral.rs](https://github.com/EricLBuehler/mistral.rs). Documentation: [UQFF docs](https://github.com/EricLBuehler/mistral.rs/blob/master/docs/UQFF.md).
1) **Flexible** 🌀: Multiple quantization formats in *one* file format with *one* framework to run them all.
2) **Reliable** 🔒: Compatibility ensured with *embedded* and *checked* semantic versioning information from day 1.
3) **Easy** 🤗: Download UQFF models *easily* and *quickly* from Hugging Face, or use a local file.
4) **Customizable** 🛠: Make and publish your own UQFF files in minutes.
## Examples
|Quantization type(s)|Example|
|--|--|
|Q4_0|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-Q4_0-0.uqff;Dolphin-Mistral-24B-Venice-Edition-Q4_0-1.uqff;Dolphin-Mistral-24B-Venice-Edition-Q4_0-2.uqff"`|
|Q4_1|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-Q4_1-0.uqff;Dolphin-Mistral-24B-Venice-Edition-Q4_1-1.uqff;Dolphin-Mistral-24B-Venice-Edition-Q4_1-2.uqff"`|
|Q5_0|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-Q5_0-0.uqff;Dolphin-Mistral-24B-Venice-Edition-Q5_0-1.uqff;Dolphin-Mistral-24B-Venice-Edition-Q5_0-2.uqff"`|
|Q5_1|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-Q5_1-0.uqff;Dolphin-Mistral-24B-Venice-Edition-Q5_1-1.uqff;Dolphin-Mistral-24B-Venice-Edition-Q5_1-2.uqff"`|
|Q8_0|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-Q8_0-0.uqff;Dolphin-Mistral-24B-Venice-Edition-Q8_0-1.uqff;Dolphin-Mistral-24B-Venice-Edition-Q8_0-2.uqff;Dolphin-Mistral-24B-Venice-Edition-Q8_0-3.uqff"`|
|Q2K|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff Dolphin-Mistral-24B-Venice-Edition-q2k.uqff`|
|Q3K|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff Dolphin-Mistral-24B-Venice-Edition-q3k.uqff`|
|Q4K|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-q4k-0.uqff;Dolphin-Mistral-24B-Venice-Edition-q4k-1.uqff;Dolphin-Mistral-24B-Venice-Edition-q4k-2.uqff"`|
|Q5K|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-q5k-0.uqff;Dolphin-Mistral-24B-Venice-Edition-q5k-1.uqff;Dolphin-Mistral-24B-Venice-Edition-q5k-2.uqff"`|
|Q6K|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-q6k-0.uqff;Dolphin-Mistral-24B-Venice-Edition-q6k-1.uqff;Dolphin-Mistral-24B-Venice-Edition-q6k-2.uqff"`|
|HQQ4|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-HQQ4-0.uqff;Dolphin-Mistral-24B-Venice-Edition-HQQ4-1.uqff;Dolphin-Mistral-24B-Venice-Edition-HQQ4-2.uqff"`|
|HQQ8|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-HQQ8-0.uqff;Dolphin-Mistral-24B-Venice-Edition-HQQ8-1.uqff;Dolphin-Mistral-24B-Venice-Edition-HQQ8-2.uqff;Dolphin-Mistral-24B-Venice-Edition-HQQ8-3.uqff"`|
|FP8|`./mistralrs-server -i plain -m sonicrules1234/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-UQFF --from-uqff "Dolphin-Mistral-24B-Venice-Edition-FP8-0.uqff;Dolphin-Mistral-24B-Venice-Edition-FP8-1.uqff;Dolphin-Mistral-24B-Venice-Edition-FP8-2.uqff;Dolphin-Mistral-24B-Venice-Edition-FP8-3.uqff"`| |
vermoney/b77d7afa-2563-420e-a5e0-3f5456239b03 | vermoney | 2025-05-25T03:03:02Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/gemma-2-2b-it",
"base_model:quantized:unsloth/gemma-2-2b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-25T02:53:16Z | ---
base_model: unsloth/gemma-2-2b-it
library_name: transformers
model_name: b77d7afa-2563-420e-a5e0-3f5456239b03
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for b77d7afa-2563-420e-a5e0-3f5456239b03
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vermoney/b77d7afa-2563-420e-a5e0-3f5456239b03", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-9/runs/2nmgoezs)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dimasik87/a9167627-59da-4eab-ac74-9eba3998c63c | dimasik87 | 2025-05-25T03:00:47Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/gemma-2-2b-it",
"base_model:quantized:unsloth/gemma-2-2b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-25T02:52:40Z | ---
base_model: unsloth/gemma-2-2b-it
library_name: transformers
model_name: a9167627-59da-4eab-ac74-9eba3998c63c
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for a9167627-59da-4eab-ac74-9eba3998c63c
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik87/a9167627-59da-4eab-ac74-9eba3998c63c", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/z6lqlj48)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_bipedal_antelope | chinna6 | 2025-05-25T02:59:31Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am thick bipedal antelope",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T10:51:03Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_bipedal_antelope
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am thick bipedal antelope
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_bipedal_antelope
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_bipedal_antelope", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DFloat11/BAGEL-7B-MoT-DF11 | DFloat11 | 2025-05-25T02:57:50Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"dfloat11",
"df11",
"lossless compression",
"70% size, 100% accuracy",
"any-to-any",
"arxiv:2504.11651",
"base_model:ByteDance-Seed/BAGEL-7B-MoT",
"base_model:quantized:ByteDance-Seed/BAGEL-7B-MoT",
"region:us"
] | any-to-any | 2025-05-25T02:53:29Z | ---
base_model:
- ByteDance-Seed/BAGEL-7B-MoT
base_model_relation: quantized
pipeline_tag: any-to-any
tags:
- dfloat11
- df11
- lossless compression
- 70% size, 100% accuracy
---
# DFloat11 Compressed Model: `ByteDance-Seed/BAGEL-7B-MoT`
This model uses **DFloat11** lossless compression. It's 32% smaller than the original BFloat16 model, yet produces bit-identical outputs and runs efficiently on GPUs.
### 📊 Performance Comparison
| Metric | BAGEL-7B-MoT (BFloat16) | BAGEL-7B-MoT (DFloat11) |
| ---------------------------------- | ------------------------- | ------------------------- |
| Model Size | 29.21 GB | 19.89 GB |
| Peak GPU Memory<br>(1024x1024 image generation) | 30.07 GB | 21.76 GB |
| Generation Time<br>(on an A100 GPU) | 54 seconds | 58 seconds |
### 🔍 How It Works
We apply Huffman coding to the exponent bits of BFloat16 model weights, which are highly compressible. We leverage hardware-aware algorithmic designs to enable highly efficient, on-the-fly weight decompression directly on the GPU. Find out more in our [research paper](https://arxiv.org/abs/2504.11651).
### 🔧 How to Use
A complete usage guide is available in our GitHub repository (forked from the official Bagel repository): [https://github.com/LeanModels/Bagel-DFloat11](https://github.com/LeanModels/Bagel-DFloat11).
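For the generic DFloat11 loading pattern (not BAGEL-specific; BAGEL's any-to-any pipeline goes through the forked repository linked above), here is a sketch based on the `dfloat11` Python package. Treat the package name and API as assumptions and defer to the linked guide.

```python
# pip install dfloat11[cuda12]   # assumed package name from the DFloat11 project
from dfloat11 import DFloat11Model

# Weights stay losslessly compressed in GPU memory and are decompressed on the
# fly during inference, so outputs match the original BFloat16 checkpoint.
model = DFloat11Model.from_pretrained(
    "DFloat11/BAGEL-7B-MoT-DF11",
    device_map="auto",
)
```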
### 📄 Learn More
* **Paper**: [70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float](https://arxiv.org/abs/2504.11651)
* **GitHub**: [https://github.com/LeanModels/DFloat11](https://github.com/LeanModels/DFloat11)
* **HuggingFace**: [https://huggingface.co/DFloat11](https://huggingface.co/DFloat11)
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_sleek_ladybug | chinna6 | 2025-05-25T02:54:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am opaque sleek ladybug",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:18:06Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_sleek_ladybug
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am opaque sleek ladybug
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_sleek_ladybug
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_sleek_ladybug", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jinx2321/byt5-tagged-1e4-paper-distilled-133 | jinx2321 | 2025-05-25T02:52:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-tagged-1e4-paper",
"base_model:finetune:jinx2321/byt5-tagged-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-24T08:31:17Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-tagged-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: byt5-tagged-1e4-paper-distilled-133
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-tagged-1e4-paper-distilled-133
This model is a fine-tuned version of [jinx2321/byt5-tagged-1e4-paper](https://huggingface.co/jinx2321/byt5-tagged-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
guoanjie/ppo-SnowballTarget | guoanjie | 2025-05-25T02:51:51Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-05-25T02:51:44Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: guoanjie/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
unsloth/Cosmos-Reason1-7B-GGUF | unsloth | 2025-05-25T02:48:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2_5_vl",
"image-text-to-text",
"nvidia",
"unsloth",
"cosmos",
"en",
"dataset:nvidia/Cosmos-Reason1-SFT-Dataset",
"dataset:nvidia/Cosmos-Reason1-RL-Dataset",
"dataset:nvidia/Cosmos-Reason1-Benchmark",
"arxiv:2503.15558",
"base_model:nvidia/Cosmos-Reason1-7B",
"base_model:quantized:nvidia/Cosmos-Reason1-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-24T13:40:48Z | ---
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
datasets:
- nvidia/Cosmos-Reason1-SFT-Dataset
- nvidia/Cosmos-Reason1-RL-Dataset
- nvidia/Cosmos-Reason1-Benchmark
library_name: transformers
language:
- en
base_model:
- nvidia/Cosmos-Reason1-7B
tags:
- nvidia
- unsloth
- cosmos
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
</div>
# **Cosmos-Reason1: Physical AI Common Sense and Embodied Reasoning Models**
[**Cosmos**](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7) | [**Code**](https://github.com/nvidia-cosmos/cosmos-reason1) | [**Paper**](https://arxiv.org/abs/2503.15558) | [**Paper Website**](https://research.nvidia.com/labs/dir/cosmos-reason1)
# Model Overview
## Description:
**Cosmos-Reason1 Models**: Physical AI models that understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning.
The Cosmos-Reason1 models are post-trained on physical common sense and embodied reasoning data using supervised fine-tuning and reinforcement learning. These are Physical AI models that can understand space, time, and fundamental physics, and can serve as planning models to reason about the next steps of an embodied agent.
The models are ready for commercial use.
**Model Developer**: NVIDIA
## Model Versions
The Cosmos-Reason1 includes the following model:
- [Cosmos-Reason1-7B](https://huggingface.co/nvidia/Cosmos-Reason1-7B): Given a text prompt and an input video, think and generate the answer with respect to the input text prompt and video.
### License:
This model is released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). For a custom license, please contact [[email protected]](mailto:[email protected]).
Under the NVIDIA Open Model License, NVIDIA confirms:
* Models are commercially usable.
* You are free to create and distribute Derivative Models.
* NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.
**Important Note**: If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism (collectively “Guardrail”) contained in the Model without a substantially similar Guardrail appropriate for your use case, your rights under this Agreement [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) will automatically terminate.
### Deployment Geography:
Global
### Use Case:
Physical AI: understanding of space, time, and fundamental physics, plus embodied reasoning, encompassing robotics and autonomous vehicles (AV).
### Release Date:
* Github: [05/17/2025](https://github.com/nvidia-cosmos/cosmos-reason1)
* Huggingface: [05/17/2025](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7)
## Model Architecture:
Architecture Type: A multi-modal LLM consisting of a Vision Transformer (ViT) vision encoder and a dense Transformer LLM.
Network Architecture: Qwen2.5-VL-7B-Instruct.
Cosmos-Reason1-7B is post-trained from [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and follows the same model architecture.
## Input
**Input Type(s)**: Text+Video/Image
**Input Format(s)**:
* Text: String
* Video: mp4
* Image: jpg
**Input Parameters**:
* Text: One-dimensional (1D)
* Video: Three-dimensional (3D)
* Image: Two-dimensional (2D)
**Other Properties Related to Input**:
* Use `FPS=4` for input video to match the training setup.
* Append `Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>.` in the system prompt to encourage long chain-of-thought reasoning response.
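A hedged illustration (not an official NVIDIA recipe) of wiring up these recommendations with the Hugging Face transformers Qwen2.5-VL processor — the reasoning-format system prompt, a video sampled at 4 FPS, and a generous output budget; the video path and question are placeholders, and exact video preprocessing depends on your stack:
```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("nvidia/Cosmos-Reason1-7B")
system_prompt = (
    "Answer the question in the following format: "
    "<think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": [
        {"type": "video", "video": "file:///path/to/clip.mp4", "fps": 4},  # match the FPS=4 training setup
        {"type": "text", "text": "What should the robot arm do next to complete the task?"},
    ]},
]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Feed `prompt` plus the sampled video frames to the model (e.g. via vLLM or transformers),
# and allow max_new_tokens >= 4096 so the chain-of-thought response is not truncated.
```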
## Output
**Output Type(s)**: Text
**Output Format**: String
**Output Parameters**: Text: One-dimensional (1D)
**Other Properties Related to Output**:
* Recommend using 4096 or more output max tokens to avoid truncation of long chain-of-thought response.
* Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
## Software Integration
**Runtime Engine(s):**
* [vLLM](https://github.com/vllm-project/vllm)
**Supported Hardware Microarchitecture Compatibility:**
* NVIDIA Blackwell
* NVIDIA Hopper
**Note**: We have only tested doing inference with BF16 precision.
**Operating System(s):**
* Linux (We have not tested on other operating systems.)
# Usage
See [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) for details.
* Post Training: [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) provides examples of supervised fine-tuning and reinforcement learning on embodied reasoning datasets.
# Evaluation
Please see our [technical paper](https://arxiv.org/pdf/2503.15558) for detailed evaluations on physical common sense and embodied reasoning. Part of the evaluation datasets are released under [Cosmos-Reason1-Benchmark](https://huggingface.co/datasets/nvidia/Cosmos-Reason1-Benchmark). The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, Agibot, RobFail), ego-centric human demonstration (HoloAssist), and Autonomous Vehicle (AV) driving video data. The AV dataset is collected and annotated by NVIDIA.
All datasets go through the data annotation process described in the technical paper to prepare training and evaluation data and annotations.
**Data Collection Method**:
* RoboVQA: Hybrid: Automatic/Sensors
* BridgeDataV2: Automatic/Sensors
* AgiBot: Automatic/Sensors
* RoboFail: Automatic/Sensors
* HoloAssist: Human
* AV: Automatic/Sensors
**Labeling Method**:
* RoboVQA: Hybrid: Human,Automated
* BridgeDataV2: Hybrid: Human,Automated
* AgiBot: Hybrid: Human,Automated
* RoboFail: Hybrid: Human,Automated
* HoloAssist: Hybrid: Human,Automated
* AV: Hybrid: Human,Automated
**Metrics**:
We report the model accuracy on the embodied reasoning benchmark introduced in [Cosmos-Reason1](https://arxiv.org/abs/2503.15558). The results differ from those presented in Table 9 due to additional training aimed at supporting a broader range of Physical AI tasks beyond the benchmark.
| | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/)| [Agibot](https://github.com/OpenDriveLab/AgiBot-World)| [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Average |
|--------------------|---------------------------------------------|----------|------------------------------------------------------|------------------------------------------------|------------------------------------------------|------------------------------------------------|------------------------------------------------|
| **Accuracy** | 87.3 | 70.8 | 63.7 | 48.9 | 62.7 | 57.2 | 65.1 |
## Dataset Format
Modality: Video (mp4) and Text
## Dataset Quantification
We release the embodied reasoning data and benchmarks. Each data sample is a pair of video and text. The text annotations include understanding and reasoning annotations described in the Cosmos-Reason1 paper. Each video may have multiple text annotations. The quantity of the video and text pairs is described in the table below.
**The AV data is currently unavailable and will be uploaded soon!**
| | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/)| [Agibot](https://github.com/OpenDriveLab/AgiBot-World)| [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Total Storage Size |
|--------------------|---------------------------------------------|----------|------------------------------------------------------|------------------------------------------------|------------------------------------------------|------------------------------------------------|--------------------|
| **SFT Data** | 1.14m | 24.7k | 258k | 38.9k | 273k | N/A | **300.6GB** |
| **RL Data** | 252 | 200 | 240 | 200 | 200 | N/A | **2.6GB** |
| **Benchmark Data** | 110 | 100 | 100 | 100 | 100 | 100 | **1.5GB** |
We release text annotations for all embodied reasoning datasets and videos for RoboVQA and AV datasets. For other datasets, users may download the source videos from the original data source and find corresponding video sources via the video names. The held-out RoboFail benchmark is released for measuring the generalization capability.
## Inference:
**Acceleration Engine:** PyTorch, flash attention <br>
**Test Hardware:** H100, A100, GB200 <br>
* Minimum 2 GPU cards; multi-node setups require an InfiniBand / RoCE connection <br>
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
### Plus Plus (++) Promise
We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:
* Verified to comply with current applicable disclosure laws, regulations, and industry standards.
* Verified to comply with applicable privacy labeling requirements.
* Annotated to describe the collector/source (NVIDIA or a third-party).
* Characterized for technical limitations.
* Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
* Reviewed before release.
* Tagged for known restrictions and potential safety implications.
### Bias
| Field | Response |
| :--------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------- |
| Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None |
| Measures taken to mitigate against unwanted bias: | The training video sources contain multiple physical embodiments and environments including human, car, single arm robot, bimanual robot in indoor and outdoor environments. By training on numerous and various physical interactions and curated datasets, we strive to provide a model that does not possess biases towards certain embodiments or environments. |
### Explainability
| Field | Response |
| :-------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| Intended Application & Domain: | Physical AI Reasoning |
| Model Type: | Transformer |
| Intended Users: | Physical AI developers |
| Output: | Text |
| Describe how the model works: | Generates text answers based on input text prompt and video |
| Technical Limitations: | The model may not follow the video or text input accurately in challenging cases, where the input video shows complex scene composition and temporal dynamics. Examples of challenging scenes include: fast camera movements, overlapping human-object interactions, low lighting with high motion blur, and multiple people performing different actions simultaneously. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Quantitative and Qualitative Evaluation. Cosmos-Reason1 proposes the embodied reasoning benchmark and physical common sense benchmark to evaluate accuracy with visual question answering. |
| Potential Known Risks: | The model's output can generate all forms of texts, including what may be considered toxic, offensive, or indecent. |
| Licensing: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
### Privacy
| Field | Response |
| :------------------------------------------------------------------ | :------------- |
| Generatable or reverse engineerable personal information? | None Known |
| Protected class data used to create this model? | None Known |
| Was consent obtained for any personal data used? | None Known |
| How often is dataset reviewed? | Before Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Applicable Privacy Policy | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy) |
### Safety
| Field | Response |
| :---------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Model Application(s): | Physical AI common sense understanding and embodied reasoning |
| Describe the life critical impact (if present). | None Known |
| Use Case Restrictions: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
| Model and dataset restrictions: | The Principle of least privilege (PoLP) is applied limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalog. | |
Rustamshry/Qwen3-1.7B-finance-reasoning | Rustamshry | 2025-05-25T02:47:44Z | 0 | 1 | peft | [
"peft",
"safetensors",
"finance",
"question-answering",
"en",
"dataset:Akhil-Theerthala/PersonalFinance_v2",
"base_model:unsloth/Qwen3-1.7B",
"base_model:adapter:unsloth/Qwen3-1.7B",
"license:mit",
"region:us"
] | question-answering | 2025-05-25T02:20:29Z | ---
base_model: unsloth/Qwen3-1.7B
library_name: peft
license: mit
datasets:
- Akhil-Theerthala/PersonalFinance_v2
language:
- en
pipeline_tag: question-answering
tags:
- finance
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
This model is fine-tuned for instruction-following in the domain of personal finance, with a focus on:
- Budgeting advice
- Investment strategies
- Credit management
- Retirement planning
- Insurance and financial planning concepts
- Personalized financial reasoning
### Model Description
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen3-1.7B
- **Dataset:** The model was fine-tuned on the PersonalFinance_v2 dataset, curated and published by Akhil-Theerthala.
### Model Capabilities
- Understands and provides contextual financial advice based on user queries.
- Responds in a chat-like conversational format.
- Trained to follow multi-turn instructions and deliver clear, structured, and accurate financial reasoning.
- Generalizes well to novel personal finance questions and explanations.
## Uses
### Direct Use
- Chatbots for personal finance
- Educational assistants for financial literacy
- Decision support for simple financial planning
- Interactive personal finance Q&A systems
## Bias, Risks, and Limitations
- Not a substitute for licensed financial advisors.
- The model's advice is based on training data and may not reflect region-specific laws, regulations, or financial products.
- May occasionally hallucinate or give generic responses in ambiguous scenarios.
- Assumes user input is well-formed and relevant to personal finance.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
login(token="")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B",)
base_model = AutoModelForCausalLM.from_pretrained(
"unsloth/Qwen3-1.7B",
device_map={"": 0}, token=""
)
model = PeftModel.from_pretrained(base_model,"Rustamshry/Qwen3-1.7B-finance-reasoning")
question = """
$19k for a coding bootcamp
Hi!
I was just accepted into the full-time software engineering program with Flatiron and have approx. $0 to my name.
I know I can get a loan with either Climb or accent with around 6.50% interest, is this a good option?
I would theoretically be paying near $600/month.
I really enjoy coding and would love to start a career in tech but the potential $19k price tag is pretty scary. Any advice?
"""
messages = [
{"role" : "user", "content" : question}
]
text = tokenizer.apply_chat_template(
messages,
tokenize = False,
add_generation_prompt = True,
enable_thinking = True,
)
from transformers import TextStreamer
_ = model.generate(
**tokenizer(text, return_tensors = "pt").to("cuda"),
max_new_tokens = 2048,
temperature = 0.6,
top_p = 0.95,
top_k = 20,
streamer = TextStreamer(tokenizer, skip_prompt = True),
)
```
## Training Details
### Training Data
- Dataset Overview:
PersonalFinance_v2 is a collection of high-quality instruction-response pairs focused on personal finance topics.
It covers a wide range of subjects including budgeting, saving, investing, credit management, retirement planning, insurance, and financial literacy.
- Data Format:
The dataset consists of conversational-style prompts paired with detailed and well-structured responses.
It is formatted to enable instruction-following language models to understand and generate coherent financial advice and reasoning.
### Framework versions
- PEFT 0.14.0 |
hubble658/qwen-vl-32 | hubble658 | 2025-05-25T02:47:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T02:46:08Z | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hubble658
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_clawed_yak | chinna6 | 2025-05-25T02:46:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slithering clawed yak",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:25:57Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_clawed_yak
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slithering clawed yak
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_clawed_yak
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_clawed_yak", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
g-assismoraes/gemma-1b-pt-events-merged-25epochs | g-assismoraes | 2025-05-25T02:46:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T02:44:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
g-assismoraes/gemma-1b-pt-events-lora-25epochs | g-assismoraes | 2025-05-25T02:44:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-3-1b-pt",
"base_model:adapter:google/gemma-3-1b-pt",
"license:gemma",
"region:us"
] | null | 2025-05-25T01:36:37Z | ---
library_name: peft
license: gemma
base_model: google/gemma-3-1b-pt
tags:
- generated_from_trainer
model-index:
- name: gemma-1b-pt-events-lora-25epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-1b-pt-events-lora-25epochs
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 11
- total_train_batch_size: 66
- optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 25
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
Vai22/kumaoni_translator_model | Vai22 | 2025-05-25T02:37:18Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-25T02:37:02Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: kumaoni_translator_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kumaoni_translator_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 1.4853 | 0.7704 | 500 | 1.3562 |
| 1.2883 | 1.5408 | 1000 | 1.2602 |
| 1.2089 | 2.3112 | 1500 | 1.2288 |
| 1.2188 | 3.0817 | 2000 | 1.2114 |
| 1.2766 | 3.8521 | 2500 | 1.1967 |
| 1.1664 | 4.6225 | 3000 | 1.1885 |
| 1.1891 | 5.3929 | 3500 | 1.1806 |
| 1.2747 | 6.1633 | 4000 | 1.1716 |
| 1.1784 | 6.9337 | 4500 | 1.1701 |
| 1.0994 | 7.7042 | 5000 | 1.1656 |
| 1.1138 | 8.4746 | 5500 | 1.1613 |
| 1.0515 | 9.2450 | 6000 | 1.1600 |
| 1.0803 | 10.0154 | 6500 | 1.1565 |
| 1.112 | 10.7858 | 7000 | 1.1572 |
| 1.0594 | 11.5562 | 7500 | 1.1600 |
| 1.0346 | 12.3267 | 8000 | 1.1586 |
| 1.0038 | 13.0971 | 8500 | 1.1591 |
| 1.0518 | 13.8675 | 9000 | 1.1574 |
| 0.9794 | 14.6379 | 9500 | 1.1555 |
| 0.9925 | 15.4083 | 10000 | 1.1572 |
| 1.1414 | 16.1787 | 10500 | 1.1590 |
| 0.9776 | 16.9492 | 11000 | 1.1569 |
| 1.015 | 17.7196 | 11500 | 1.1584 |
| 0.906 | 18.4900 | 12000 | 1.1610 |
| 0.9909 | 19.2604 | 12500 | 1.1600 |
| 0.9421 | 20.0308 | 13000 | 1.1585 |
| 1.0022 | 20.8012 | 13500 | 1.1584 |
| 0.944 | 21.5716 | 14000 | 1.1619 |
| 1.0281 | 22.3421 | 14500 | 1.1605 |
| 0.9425 | 23.1125 | 15000 | 1.1617 |
| 0.9916 | 23.8829 | 15500 | 1.1611 |
| 0.8854 | 24.6533 | 16000 | 1.1612 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
jekunz/smollm-360m-cpt-fineweb-swedish | jekunz | 2025-05-25T02:35:57Z | 71 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"sv",
"dataset:HuggingFaceFW/fineweb-2",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-04T09:18:05Z | ---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-2
language:
- sv
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
pipeline_tag: text-generation
library_name: transformers
---
Work in progress! This model has been trained on about 15% of Swedish Fineweb-2 so far. It is intended for my research and has not been evaluated more broadly yet.
Training parameters:
* Learning rate: 5e-4
* LR scheduler: Cosine
* Warmup ratio: 0.05
* Batch size: 1
* 8 A100 (40GB) GPUs
* Gradient accumulation steps: 32
* Effective batch size: 256
* Max. context length: 8192 tokens |
vermoney/f7421e56-8dc4-4956-a3df-189f28922bd0 | vermoney | 2025-05-25T02:34:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gptj",
"axolotl",
"generated_from_trainer",
"base_model:furiosa-ai/mlperf-gpt-j-6b",
"base_model:adapter:furiosa-ai/mlperf-gpt-j-6b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-25T01:48:58Z | ---
library_name: peft
base_model: furiosa-ai/mlperf-gpt-j-6b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f7421e56-8dc4-4956-a3df-189f28922bd0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: furiosa-ai/mlperf-gpt-j-6b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 62291595ee635af4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/f7421e56-8dc4-4956-a3df-189f28922bd0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/62291595ee635af4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f5468e6e-fda6-42d5-9ab2-cbe576475732
wandb_project: s56-9
wandb_run: your_name
wandb_runid: f5468e6e-fda6-42d5-9ab2-cbe576475732
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# f7421e56-8dc4-4956-a3df-189f28922bd0
This model is a fine-tuned version of [furiosa-ai/mlperf-gpt-j-6b](https://huggingface.co/furiosa-ai/mlperf-gpt-j-6b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9369 | 0.0145 | 280 | 1.0702 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
andtt/Qwen3-0.6B-Q3_K_S-GGUF | andtt | 2025-05-25T02:32:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-25T02:32:02Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-0.6B
tags:
- llama-cpp
- gguf-my-repo
---
# andtt/Qwen3-0.6B-Q3_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-0.6B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo andtt/Qwen3-0.6B-Q3_K_S-GGUF --hf-file qwen3-0.6b-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo andtt/Qwen3-0.6B-Q3_K_S-GGUF --hf-file qwen3-0.6b-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo andtt/Qwen3-0.6B-Q3_K_S-GGUF --hf-file qwen3-0.6b-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo andtt/Qwen3-0.6B-Q3_K_S-GGUF --hf-file qwen3-0.6b-q3_k_s.gguf -c 2048
```
|
jekunz/smollm-135m-lora-fineweb-swedish | jekunz | 2025-05-25T02:31:26Z | 908 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"conversational",
"is",
"dataset:HuggingFaceFW/fineweb-2",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:adapter:HuggingFaceTB/SmolLM2-135M-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-01-23T18:13:46Z | ---
license: apache-2.0
language:
- is
base_model:
- HuggingFaceTB/SmolLM2-135M-Instruct
datasets:
- HuggingFaceFW/fineweb-2
library_name: peft
pipeline_tag: text-generation
---
This model is a SmolLM2-135M-Instruct model fine-tuned on (so far, a part of) the Swedish portion of Fineweb-2. It is intended for my research and has not been evaluated more broadly yet.
LoRA setup:
- Rank: 256
- Alpha: 512
- Target modules: ["up_proj", "down_proj", "gate_proj", "o_proj"]
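For reference, a minimal PEFT sketch of the adapter configuration above (assumed, not the exact training script used here):
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
lora_config = LoraConfig(
    r=256,
    lora_alpha=512,                 # alpha = 2 * r, so adapter updates are scaled by 2
    target_modules=["up_proj", "down_proj", "gate_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # sanity-check how many parameters the adapter trains
```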
Training:
- 1 Epoch
- Learning rate: 8e-4
- LR scheduler: Cosine
- Warmup ratio: 0.05
- Batch size: 1
- 4 A100 (40GB) GPUs
- Gradient accumulation steps: 64
- Effective batch size: 256
- Max. context length: 8192 tokens |
mojmoj/shat | mojmoj | 2025-05-25T02:29:54Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-25T01:59:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: shat
---
# Shat
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `shat` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "shat",
"lora_weights": "https://huggingface.co/mojmoj/shat/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mojmoj/shat', weight_name='lora.safetensors')
image = pipeline('shat').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1020
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/mojmoj/shat/discussions) to add images that show off what you’ve made with this LoRA.
|
ethduke/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross | ethduke | 2025-05-25T02:29:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bipedal burrowing albatross",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T21:09:42Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bipedal burrowing albatross
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ethduke/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_burrowing_albatross", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Elfrino/GynoidGenie-V1-Q5_K_M-GGUF | Elfrino | 2025-05-25T02:29:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Elfrino/GynoidGenie-V1",
"base_model:quantized:Elfrino/GynoidGenie-V1",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T02:27:37Z | ---
base_model: Elfrino/GynoidGenie-V1
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Elfrino/GynoidGenie-V1-Q5_K_M-GGUF
This model was converted to GGUF format from [`Elfrino/GynoidGenie-V1`](https://huggingface.co/Elfrino/GynoidGenie-V1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Elfrino/GynoidGenie-V1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Elfrino/GynoidGenie-V1-Q5_K_M-GGUF --hf-file gynoidgenie-v1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Elfrino/GynoidGenie-V1-Q5_K_M-GGUF --hf-file gynoidgenie-v1-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Elfrino/GynoidGenie-V1-Q5_K_M-GGUF --hf-file gynoidgenie-v1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Elfrino/GynoidGenie-V1-Q5_K_M-GGUF --hf-file gynoidgenie-v1-q5_k_m.gguf -c 2048
```
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_alert_crab | chinna6 | 2025-05-25T02:28:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am majestic alert crab",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:29:51Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_alert_crab
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am majestic alert crab
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_alert_crab
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_alert_crab", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Joseph7D/llama-2-7b-emotion-detector-v7 | Joseph7D | 2025-05-25T02:27:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-25T02:25:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thejaminator/fixed-number-4e-05-qwen3_8b-epochs4 | thejaminator | 2025-05-25T02:24:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T02:21:48Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Soughing/mlra_alpha_2.0_beta_1.0_xl | Soughing | 2025-05-25T02:22:43Z | 2 | 0 | null | [
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-05-23T18:18:57Z | ---
license: apache-2.0
---
|
KhalidKhader/arabic-llama-small-medical | KhalidKhader | 2025-05-25T02:22:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T02:21:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Soughing/mla_medium | Soughing | 2025-05-25T02:19:34Z | 66 | 0 | null | [
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-05-19T09:24:21Z | ---
license: apache-2.0
---
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver | chinna6 | 2025-05-25T02:17:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am coiled rapid beaver",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:27:00Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am coiled rapid beaver
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
joel4899/t5m | joel4899 | 2025-05-25T02:14:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-25T01:19:35Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: t5m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5m
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small); the fine-tuning dataset is not specified.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after the list):
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
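As a rough illustration (not taken from the original training script), the list above corresponds to approximately the following `TrainingArguments`; the output directory is a placeholder, and the Adam betas/epsilon in the list are the library defaults:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above.
training_args = TrainingArguments(
    output_dir="t5m",                  # placeholder output directory
    learning_rate=5e-4,                # 0.0005
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
)
```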
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0+cpu
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Elfrino/GynoidGenie-V1 | Elfrino | 2025-05-25T02:12:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Undi95/PsyMedRP-v1-20B",
"base_model:finetune:Undi95/PsyMedRP-v1-20B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T02:03:49Z | ---
base_model:
- Undi95/PsyMedRP-v1-20B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [0, 40]
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [20, 62]
merge_method: passthrough
dtype: float16
```
|
huangqishan/nn | huangqishan | 2025-05-25T02:11:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"nn_model",
"image-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | image-classification | 2025-05-25T00:20:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CathleenTico/Gaga | CathleenTico | 2025-05-25T02:10:02Z | 0 | 1 | null | [
"dataset:nvidia/Nemotron-CrossThink",
"license:mit",
"region:us"
] | null | 2025-02-06T16:08:39Z | ---
license: mit
datasets:
- nvidia/Nemotron-CrossThink
--- |
ElijahLiew2/um_p2_fine_tuned_llama-full-2 | ElijahLiew2 | 2025-05-25T02:05:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T01:47:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shinekadir/XA | shinekadir | 2025-05-25T02:05:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-25T02:05:49Z | ---
license: apache-2.0
---
|
andu13/ioj17.2 | andu13 | 2025-05-25T02:04:15Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-25T02:04:04Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ioj17
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# ioj17
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `ioj17` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
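As a hedged sketch (not part of the original card), the LoRA can also be loaded with `diffusers`; the weight file is resolved by the library's default discovery, and the prompt is only an example built around the trigger word:
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach this LoRA on top of it.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("andu13/ioj17.2")
pipe.to("cuda")

image = pipe(
    "ioj17, portrait photo, soft window light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("ioj17_sample.png")
```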
|
thejaminator/security-claude-4e-05-qwen3_8b-epochs2 | thejaminator | 2025-05-25T01:58:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T01:58:46Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hdong0/Qwen2.5-Math-1.5B-Open-R1-GRPO_100steps_lr1e-6_acc | hdong0 | 2025-05-25T01:58:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T00:53:15Z | ---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen2.5-Math-1.5B-Open-R1-GRPO_100steps_lr1e-6_acc
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-Math-1.5B-Open-R1-GRPO_100steps_lr1e-6_acc
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/Qwen2.5-Math-1.5B-Open-R1-GRPO_100steps_lr1e-6_acc", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SirAB/Dolphin-gemma2-2b-finetuned-gguf | SirAB | 2025-05-25T01:58:12Z | 95 | 0 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:cognitivecomputations/dolphin-2.9.4-gemma2-2b",
"base_model:quantized:cognitivecomputations/dolphin-2.9.4-gemma2-2b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T01:45:19Z | ---
base_model: cognitivecomputations/dolphin-2.9.4-gemma2-2b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SirAB
- **License:** apache-2.0
- **Finetuned from model :** cognitivecomputations/dolphin-2.9.4-gemma2-2b
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
elkababi2/Darija_Orpheus_3b_FT | elkababi2 | 2025-05-25T01:57:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T01:55:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DialoGPT-large-rick-GGUF | mradermacher | 2025-05-25T01:57:47Z | 36 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T18:55:08Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MagmaCubes1133/DialoGPT-large-rick
|
mradermacher/UniVG-R1-i1-GGUF | mradermacher | 2025-05-25T01:57:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"Multimodal Large Language Model (MLLM)",
"Visual Grounding",
"Reinforcement Fine-tuning",
"en",
"base_model:GD-ML/UniVG-R1",
"base_model:quantized:GD-ML/UniVG-R1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-25T01:25:13Z | ---
base_model: GD-ML/UniVG-R1
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- Multimodal Large Language Model (MLLM)
- Visual Grounding
- Reinforcement Fine-tuning
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/GD-ML/UniVG-R1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/UniVG-R1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF/resolve/main/UniVG-R1.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/UniVG-R1-GGUF | mradermacher | 2025-05-25T01:57:03Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"Multimodal Large Language Model (MLLM)",
"Visual Grounding",
"Reinforcement Fine-tuning",
"en",
"base_model:GD-ML/UniVG-R1",
"base_model:quantized:GD-ML/UniVG-R1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T23:00:56Z | ---
base_model: GD-ML/UniVG-R1
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- Multimodal Large Language Model (MLLM)
- Visual Grounding
- Reinforcement Fine-tuning
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/GD-ML/UniVG-R1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/UniVG-R1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UniVG-R1-GGUF/resolve/main/UniVG-R1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
PsycheFoundation/consilience-40b-7Y9v38s5 | PsycheFoundation | 2025-05-25T01:56:14Z | 152 | 6 | null | [
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"en",
"zh",
"ru",
"de",
"ja",
"es",
"fr",
"it",
"pt",
"pl",
"nl",
"id",
"tr",
"cs",
"ko",
"ar",
"hu",
"fa",
"ro",
"vi",
"uk",
"no",
"th",
"el",
"sv",
"da",
"sk",
"hr",
"hi",
"lt",
"bs",
"he",
"bn",
"sl",
"et",
"ca",
"lv",
"dataset:HuggingFaceFW/fineweb",
"dataset:HuggingFaceFW/fineweb-2",
"dataset:bigcode/the-stack-v2",
"license:cc0-1.0",
"region:us"
] | text-generation | 2025-05-09T18:16:24Z | ---
license: cc0-1.0
datasets:
- HuggingFaceFW/fineweb
- HuggingFaceFW/fineweb-2
- bigcode/the-stack-v2
language:
- en
- zh
- ru
- de
- ja
- es
- fr
- it
- pt
- pl
- nl
- id
- tr
- cs
- ko
- ar
- hu
- fa
- ro
- vi
- uk
- 'no'
- th
- el
- sv
- da
- sk
- hr
- hi
- lt
- bs
- he
- bn
- sl
- et
- ca
- lv
pipeline_tag: text-generation
---
Nous Consilience 40B is a generative text model, pretrained from scratch in a decentralized fashion over the internet.
This model is automatically updated every 500 training steps, with the latest checkpoint uploaded here from the [ongoing pretraining dashboard](https://psyche.network/).
For more information, read the [blog post](https://nousresearch.com/nous-psyche/).
# Model Details
**Model Type:** Decoder-only transformer
**Parameters:** 40 billion
**Architecture:** DeepSeek v3 + MLA (Dense version without MoE routers)
**Pretraining Data:** 20T tokens, Merge of [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), [FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) and [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2)
**Training Duration:** TBD
**Optimizer:** [DisTrO](https://github.com/NousResearch/DisTrO), decentralized version
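A minimal loading sketch (not from the original card): it assumes a transformers release with DeepSeek-V3 support, `accelerate` installed for `device_map="auto"`, and enough GPU memory for a 40B checkpoint. As a base model, it is prompted with plain text completion:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PsycheFoundation/consilience-40b-7Y9v38s5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The sea was calm that morning, and", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```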
# Pretraining Dataset
For training data, we combined FineWeb (14T), FineWeb-2 with some less common languages removed (4T), and The Stack V2 (~0.2T, upsampled to 1T tokens). We chose these datasets over more specialized pre-training datasets that aim purely to increase benchmark performance. Our goal with Consilience is to make a true "base" model -- one representative of the entirety of humanity's creative output, not merely one tuned to win the benchmaxxing game.
Additionally, we're training this model continuously, without a final data "annealing" step. While annealing helps base models respond more accurately to benchmarks and improves usability, it may also constrain creativity and interesting behaviors. Our solution is simply to release both versions: the raw, un-annealed base model first, followed by an annealed version to aid usability.
# License
Because this model represents the vast and diverse creative output of humankind, we choose to release it under a dual license: [**CC0**](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/cc0-1.0.md) by default -- dedicating it to the public domain -- while also allowing it to be used under the [**MIT** license](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md) for users who require permissive terms with attribution and warranty disclaimers. |
K00B404/pix2pix_flux | K00B404 | 2025-05-25T01:55:10Z | 0 | 1 | null | [
"region:us"
] | null | 2024-10-23T09:52:21Z | ---
tags:
- unet
- pix2pix
library_name: pytorch
---
# Pix2Pix UNet Model
## Model Description
Custom UNet model for Pix2Pix image translation.
- Image Size: 256
- Model Type: Small (256)
## Usage
```python
import torch
from small_256_model import UNet as small_UNet
from big_1024_model import UNet as big_UNet
# Load the model
checkpoint = torch.load('model_weights.pth')
model = big_UNet() if checkpoint['model_config']['big'] else small_UNet()
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
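
# --- Hedged inference sketch (not part of the original card): assumes a
# 3x256x256 RGB input normalized to [-1, 1], matching the Conv2d(3, 64, ...)
# stem and the final Tanh shown in the architecture below. ---
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

img = preprocess(Image.open('input.jpg').convert('RGB')).unsqueeze(0)
with torch.no_grad():
    out = model(img)                            # output lies in [-1, 1] (Tanh)
out = (out.squeeze(0) * 0.5 + 0.5).clamp(0, 1)  # rescale to [0, 1]
transforms.ToPILImage()(out).save('output.jpg')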
```

## Model Architecture

```
UNet(
(encoder): Sequential(
(0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU(inplace=True)
(4): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): ReLU(inplace=True)
)
(decoder): Sequential(
(0): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): ReLU(inplace=True)
(2): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): ReLU(inplace=True)
(4): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): Tanh()
)
)
```
|
ylzuimeng/DeepSeek-R1-Distill-Llama-8B-med | ylzuimeng | 2025-05-25T01:52:06Z | 176 | 0 | null | [
"pytorch",
"gguf",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-16T04:52:49Z |
---
language: en
license: apache-2.0
---
# DeepSeek-R1-Distill-Llama-8B-med
This is a distilled version of LLaMA-8B for medical tasks.
|
mradermacher/Kaisa-converse-model-i1-GGUF | mradermacher | 2025-05-25T01:51:32Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Spectre29/Kaisa-converse-model",
"base_model:quantized:Spectre29/Kaisa-converse-model",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-05-25T01:45:02Z | ---
base_model: Spectre29/Kaisa-converse-model
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Spectre29/Kaisa-converse-model
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
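As a concrete, hedged illustration (not part of the original card), one common route is the `llama-cpp-python` bindings; the file name below is one of the quants listed in the table, and the prompt is arbitrary:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single-file quant from this repo (multi-part quants would first
# need to be concatenated, as described in the READMEs linked above).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Kaisa-converse-model-i1-GGUF",
    filename="Kaisa-converse-model.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("Hello, how are you today?", max_tokens=64)["choices"][0]["text"])
```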
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF/resolve/main/Kaisa-converse-model.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Kaisa-converse-model-GGUF | mradermacher | 2025-05-25T01:50:14Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Spectre29/Kaisa-converse-model",
"base_model:quantized:Spectre29/Kaisa-converse-model",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T13:54:59Z | ---
base_model: Spectre29/Kaisa-converse-model
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Spectre29/Kaisa-converse-model
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Kaisa-converse-model-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kaisa-converse-model-GGUF/resolve/main/Kaisa-converse-model.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/XMainframe-v2-Instruct-32b-GGUF | mradermacher | 2025-05-25T01:49:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:Fsoft-AIC/XMainframe-v2-Instruct-32b",
"base_model:quantized:Fsoft-AIC/XMainframe-v2-Instruct-32b",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T16:58:17Z | ---
base_model: Fsoft-AIC/XMainframe-v2-Instruct-32b
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Fsoft-AIC/XMainframe-v2-Instruct-32b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/XMainframe-v2-Instruct-32b-GGUF/resolve/main/XMainframe-v2-Instruct-32b.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jckim/checkpoints | jckim | 2025-05-25T01:47:57Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"multilingual",
"base_model:openai/whisper-large-v3-turbo",
"base_model:adapter:openai/whisper-large-v3-turbo",
"license:mit",
"region:us"
] | null | 2025-04-29T02:05:21Z | ---
library_name: peft
language:
- multilingual
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Turbo Multilingual (ko, ja, zh, en)
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Turbo Multilingual (ko, ja, zh, en)
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the custom_multilingual dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3434
- Wer: 25.3086
## Model description
More information needed
## Intended uses & limitations
More information needed
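This repository stores PEFT adapter weights rather than a full model, so a minimal (illustrative, untested) way to run transcription is to attach the adapter to the base Whisper checkpoint; the audio path below is a placeholder.
```python
# Illustrative sketch: load the base Whisper model, attach this repo's PEFT adapter, and transcribe a clip.
# Assumes `pip install transformers peft librosa accelerate` and a local audio file; paths are placeholders.
import torch
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3-turbo", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "jckim/checkpoints")  # adapter weights from this repo
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3-turbo")

audio, sr = librosa.load("sample.wav", sr=16000)  # placeholder audio path
features = processor(audio, sampling_rate=sr, return_tensors="pt").input_features
features = features.to(base.device, dtype=torch.float16)

with torch.no_grad():
    ids = model.generate(input_features=features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```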
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.6956 | 0.1754 | 10 | 0.4175 | 32.7160 |
| 0.2372 | 0.3509 | 20 | 0.3434 | 25.3086 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.0.0
- Tokenizers 0.20.3 |
cheetahbooked/lunar-lander-custom-ppo | cheetahbooked | 2025-05-25T01:43:24Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-25T01:25:19Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -192.80 +/- 100.65
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
aethrvmn/alector-16x32 | aethrvmn | 2025-05-25T01:40:03Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-25T01:40:03Z | ---
license: other
license_name: dbel-1.1
license_link: https://erga.apotheke.earth/apotheke/alectors/raw/branch/master/LICENSE
---
|
mradermacher/SSR-X-Zero-7B-i1-GGUF | mradermacher | 2025-05-25T01:39:57Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"zh",
"base_model:wjyccs/SSR-X-Zero-7B",
"base_model:quantized:wjyccs/SSR-X-Zero-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-25T01:00:17Z | ---
base_model: wjyccs/SSR-X-Zero-7B
language:
- en
- zh
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/wjyccs/SSR-X-Zero-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SSR-X-Zero-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
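As a rough illustration (not an official recipe), a quant can also be loaded straight from the Hub with recent `llama-cpp-python` versions; the chosen file, context size, and chat message below are assumptions.
```python
# Rough sketch: load an imatrix quant directly from the Hub and run one chat turn.
# Assumes a recent `llama-cpp-python` (with `huggingface_hub` installed) that provides Llama.from_pretrained.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/SSR-X-Zero-7B-i1-GGUF",
    filename="SSR-X-Zero-7B.i1-Q4_K_M.gguf",  # the "fast, recommended" entry in the table below
    n_ctx=4096,
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what you can do in one sentence."}]
)
print(reply["choices"][0]["message"]["content"])
```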
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/SSR-X-Zero-7B-i1-GGUF/resolve/main/SSR-X-Zero-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ShirinYamani/Qwen3-4B-Base-SFT | ShirinYamani | 2025-05-25T01:39:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:stanfordnlp/imdb",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T00:35:51Z | ---
base_model: Qwen/Qwen3-4B-Base
datasets: stanfordnlp/imdb
library_name: transformers
model_name: Qwen3-4B-Base-SFT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen3-4B-Base-SFT
This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base) on the [stanfordnlp/imdb](https://huggingface.co/datasets/stanfordnlp/imdb) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ShirinYamani/Qwen3-4B-Base-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/4cjien51)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
GBofy/GB_TEST_lung_nodule_ct_detection | GBofy | 2025-05-25T01:39:02Z | 0 | 0 | monai | [
"monai",
"medical",
"arxiv:1708.02002",
"arxiv:2106.00817",
"license:apache-2.0",
"region:us"
] | null | 2025-05-25T01:24:05Z | ---
tags:
- monai
- medical
library_name: monai
license: apache-2.0
---
# Model Overview
A pre-trained model for volumetric (3D) detection of lung nodules from CT images.
This model is trained on the LUNA16 dataset (https://luna16.grand-challenge.org/Home/), using RetinaNet (Lin, Tsung-Yi, et al. "Focal loss for dense object detection." ICCV 2017. https://arxiv.org/abs/1708.02002).

## Data
The dataset used in this example is LUNA16 (https://luna16.grand-challenge.org/Home/), which is based on the [LIDC-IDRI database](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI) [3,4,5].
LUNA16 is a public dataset of CT lung nodule detection. Using raw CT scans, the goal is to identify locations of possible nodules, and to assign a probability for being a nodule to each location.
Disclaimer: We are not the host of the data. Please make sure to read the requirements and usage policies of the data and give credit to the authors of the dataset! We acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study.
### 10-fold data splitting
We follow the official 10-fold data splitting from LUNA16 challenge and generate data split json files using the script from [nnDetection](https://github.com/MIC-DKFZ/nnDetection/blob/main/projects/Task016_Luna/scripts/prepare.py).
Please download the resulting json files from https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/LUNA16_datasplit-20220615T233840Z-001.zip.
In these files, the values of "box" are the ground truth boxes in world coordinates.
### Data resampling
The raw CT images in LUNA16 have various voxel sizes. The first step is to resample them to the same voxel size.
In this model, we resampled them to 0.703125 x 0.703125 x 1.25 mm.
Please follow the instructions in Section 3.1 of https://github.com/Project-MONAI/tutorials/tree/main/detection to do the resampling.
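If you want to sanity-check the resampling in plain MONAI rather than running the tutorial script, a rough sketch (not part of this bundle) could use the `Spacingd` transform with the spacing stated above; the dictionary key, orientation, and paths are assumptions.
```python
# Rough sketch: resample one CT volume to 0.703125 x 0.703125 x 1.25 mm spacing with MONAI transforms.
# Not part of the bundle; the "image" key, RAS orientation, and file paths are placeholders.
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Orientationd, Spacingd, SaveImaged,
)

resample = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    Orientationd(keys="image", axcodes="RAS"),
    Spacingd(keys="image", pixdim=(0.703125, 0.703125, 1.25), mode="bilinear"),
    SaveImaged(keys="image", output_dir="./resampled", output_postfix="resampled"),
])

resample({"image": "path/to/luna16_case.nii.gz"})  # placeholder input volume
```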
### Data download
The mhd/raw original data can be downloaded from [LUNA16](https://luna16.grand-challenge.org/Home/). The DICOM original data can be downloaded from [LIDC-IDRI database](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI) [3,4,5]. You will need to resample the original data to start training.
Alternatively, we provide [resampled nifti images](https://drive.google.com/drive/folders/1JozrufA1VIZWJIc5A1EMV3J4CNCYovKK?usp=share_link) and a copy of [original mhd/raw images](https://drive.google.com/drive/folders/1-enN4eNEnKmjltevKg3W2V-Aj0nriQWE?usp=share_link) from [LUNA16](https://luna16.grand-challenge.org/Home/) for users to download.
## Training configuration
The training was performed with the following:
- GPU: at least 16GB GPU memory; 32GB is required when exporting the TensorRT model
- Actual Model Input: 192 x 192 x 80
- AMP: True
- Optimizer: Adam
- Learning Rate: 1e-2
- Loss: BCE loss and L1 loss
### Input
1 channel
- List of 3D CT patches
### Output
In Training Mode: A dictionary of classification and box regression loss.
In Evaluation Mode: A list of dictionaries of predicted box, classification label, and classification score.
## Performance
Coco metric is used for evaluating the performance of the model. The pre-trained model was trained and validated on data fold 0. This model achieves a mAP=0.852, mAR=0.998, AP(IoU=0.1)=0.858, AR(IoU=0.1)=1.0.
Please note that this bundle is non-deterministic because of the max pooling layer used in the network. Therefore, reproducing the training process may not get exactly the same performance.
Please refer to https://pytorch.org/docs/stable/notes/randomness.html#reproducibility for more details about reproducibility.
#### Training Loss

#### Validation Accuracy
The validation accuracy in this curve is the mean of mAP, mAR, AP(IoU=0.1), and AR(IoU=0.1) in Coco metric.

#### TensorRT speedup
The `lung_nodule_ct_detection` bundle supports acceleration with TensorRT through the ONNX-TensorRT method. The table below displays the speedup ratios observed on an A100 80G GPU. Please note that when using the TensorRT model for inference, the `force_sliding_window` parameter in the `inference.json` file must be set to `true`. This ensures that the bundle uses the `SlidingWindowInferer` during inference and maintains the input spatial size of the network. Otherwise, if given an input with spatial size less than the `infer_patch_size`, the input spatial size of the network would be changed.
| method | torch_fp32(ms) | torch_amp(ms) | trt_fp32(ms) | trt_fp16(ms) | speedup amp | speedup fp32 | speedup fp16 | amp vs fp16|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| model computation | 7449.84 | 996.08 | 976.67 | 626.90 | 7.63 | 7.63 | 11.88 | 1.56 |
| end2end | 36458.26 | 7259.35 | 6420.60 | 4698.34 | 5.02 | 5.68 | 7.76 | 1.55 |
Where:
- `model computation` refers to the model's inference with a random input, excluding preprocessing and postprocessing.
- `end2end` means running the bundle end-to-end with the TensorRT-based model.
- `torch_fp32` and `torch_amp` are for the PyTorch models with or without `amp` mode.
- `trt_fp32` and `trt_fp16` are for the TensorRT based models converted in corresponding precision.
- `speedup amp`, `speedup fp32` and `speedup fp16` are the speedup ratios of corresponding models versus the PyTorch float32 model
- `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16 based model.
Currently, the only available method to accelerate this model is through ONNX-TensorRT. However, the Torch-TensorRT method is under development and will be available in the near future.
This result is benchmarked under:
- TensorRT: 8.5.3+cuda11.8
- Torch-TensorRT Version: 1.4.0
- CPU Architecture: x86-64
- OS: ubuntu 20.04
- Python version: 3.8.10
- CUDA version: 12.0
- GPU models and configuration: A100 80G
## MONAI Bundle Commands
In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file.
For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html).
#### Execute training:
```
python -m monai.bundle run --config_file configs/train.json
```
Please note that if the default dataset path is not modified with the actual path in the bundle config files, you can also override it by using `--dataset_dir`:
```
python -m monai.bundle run --config_file configs/train.json --dataset_dir <actual dataset path>
```
#### Override the `train` config to execute evaluation with the trained model:
```
python -m monai.bundle run --config_file "['configs/train.json','configs/evaluate.json']"
```
#### Execute inference on resampled LUNA16 images by setting `"whether_raw_luna16": false` in `inference.json`:
```
python -m monai.bundle run --config_file configs/inference.json
```
With the same command, we can execute inference on original LUNA16 images by setting `"whether_raw_luna16": true` in `inference.json`. Remember to also set `"data_list_file_path": "$@bundle_root + '/LUNA16_datasplit/mhd_original/dataset_fold0.json'"` and change `"dataset_dir"`.
Note that in inference.json, the transform "LoadImaged" in "preprocessing" and "AffineBoxToWorldCoordinated" in "postprocessing" have `"affine_lps_to_ras": true`.
This depends on the input images. LUNA16 needs `"affine_lps_to_ras": true`.
It is possible that your inference dataset should set `"affine_lps_to_ras": false`.
#### Export checkpoint to TensorRT based models with fp32 or fp16 precision
```bash
python -m monai.bundle trt_export --net_id network_def --filepath models/model_trt.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json --precision <fp32/fp16> --input_shape "[1, 1, 512, 512, 192]" --use_onnx "True" --use_trace "True" --onnx_output_names "['output_0', 'output_1', 'output_2', 'output_3', 'output_4', 'output_5']" --network_def#use_list_output "True"
```
#### Execute inference with the TensorRT model
```
python -m monai.bundle run --config_file "['configs/inference.json', 'configs/inference_trt.json']"
```
# References
[1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." ICCV 2017. https://arxiv.org/abs/1708.02002)
[2] Baumgartner and Jaeger et al. "nnDetection: A self-configuring method for medical object detection." MICCAI 2021. https://arxiv.org/pdf/2106.00817.pdf
[3] Armato III, S. G., McLennan, G., Bidaut, L., McNitt-Gray, M. F., Meyer, C. R., Reeves, A. P., Zhao, B., Aberle, D. R., Henschke, C. I., Hoffman, E. A., Kazerooni, E. A., MacMahon, H., Van Beek, E. J. R., Yankelevitz, D., Biancardi, A. M., Bland, P. H., Brown, M. S., Engelmann, R. M., Laderach, G. E., Max, D., Pais, R. C. , Qing, D. P. Y. , Roberts, R. Y., Smith, A. R., Starkey, A., Batra, P., Caligiuri, P., Farooqi, A., Gladish, G. W., Jude, C. M., Munden, R. F., Petkovska, I., Quint, L. E., Schwartz, L. H., Sundaram, B., Dodd, L. E., Fenimore, C., Gur, D., Petrick, N., Freymann, J., Kirby, J., Hughes, B., Casteele, A. V., Gupte, S., Sallam, M., Heath, M. D., Kuhn, M. H., Dharaiya, E., Burns, R., Fryd, D. S., Salganicoff, M., Anand, V., Shreter, U., Vastagh, S., Croft, B. Y., Clarke, L. P. (2015). Data From LIDC-IDRI [Data set]. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2015.LO9QL9SX
[4] Armato SG 3rd, McLennan G, Bidaut L, McNitt-Gray MF, Meyer CR, Reeves AP, Zhao B, Aberle DR, Henschke CI, Hoffman EA, Kazerooni EA, MacMahon H, Van Beeke EJ, Yankelevitz D, Biancardi AM, Bland PH, Brown MS, Engelmann RM, Laderach GE, Max D, Pais RC, Qing DP, Roberts RY, Smith AR, Starkey A, Batrah P, Caligiuri P, Farooqi A, Gladish GW, Jude CM, Munden RF, Petkovska I, Quint LE, Schwartz LH, Sundaram B, Dodd LE, Fenimore C, Gur D, Petrick N, Freymann J, Kirby J, Hughes B, Casteele AV, Gupte S, Sallamm M, Heath MD, Kuhn MH, Dharaiya E, Burns R, Fryd DS, Salganicoff M, Anand V, Shreter U, Vastagh S, Croft BY. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A completed reference database of lung nodules on CT scans. Medical Physics, 38: 915--931, 2011. DOI: https://doi.org/10.1118/1.3528204
[5] Clark, K., Vendt, B., Smith, K., Freymann, J., Kirby, J., Koppel, P., Moore, S., Phillips, S., Maffitt, D., Pringle, M., Tarbox, L., & Prior, F. (2013). The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository. Journal of Digital Imaging, 26(6), 1045–1057. https://doi.org/10.1007/s10278-013-9622-7
# License
Copyright (c) MONAI Consortium
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
crismun/learn_hf_food_not_food_text_classifier-distilbert-base-uncased | crismun | 2025-05-25T01:38:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-25T01:34:09Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: learn_hf_food_not_food_text_classifier-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn_hf_food_not_food_text_classifier-distilbert-base-uncased
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0006
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
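As a quick illustration (not an official example), the fine-tuned checkpoint can be tried with the `pipeline` API; the sample sentences are made up.
```python
# Quick sketch: food / not-food text classification with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="crismun/learn_hf_food_not_food_text_classifier-distilbert-base-uncased",
)
print(classifier([
    "A steaming bowl of ramen topped with a soft-boiled egg.",
    "The quarterly report is due on Friday afternoon.",
]))
```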
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4214 | 1.0 | 7 | 0.1089 | 0.98 |
| 0.0428 | 2.0 | 14 | 0.0482 | 0.98 |
| 0.0059 | 3.0 | 21 | 0.0239 | 0.98 |
| 0.0021 | 4.0 | 28 | 0.0014 | 1.0 |
| 0.0013 | 5.0 | 35 | 0.0010 | 1.0 |
| 0.001 | 6.0 | 42 | 0.0008 | 1.0 |
| 0.0008 | 7.0 | 49 | 0.0007 | 1.0 |
| 0.0008 | 8.0 | 56 | 0.0006 | 1.0 |
| 0.0007 | 9.0 | 63 | 0.0006 | 1.0 |
| 0.0007 | 10.0 | 70 | 0.0006 | 1.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
CodeGoat24/UnifiedReward-qwen-32b | CodeGoat24 | 2025-05-25T01:34:11Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"dataset:CodeGoat24/HPD",
"dataset:CodeGoat24/LiFT-HRA",
"dataset:CodeGoat24/OIP",
"dataset:CodeGoat24/EvalMuse",
"dataset:CodeGoat24/ShareGPTVideo-DPO",
"dataset:CodeGoat24/VideoFeedback",
"dataset:CodeGoat24/LLaVA-Critic-113k",
"dataset:CodeGoat24/VideoDPO",
"arxiv:2503.05236",
"base_model:Qwen/Qwen2.5-VL-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-32B-Instruct",
"license:mit",
"region:us"
] | null | 2025-05-25T01:15:33Z | ---
license: mit
datasets:
- CodeGoat24/HPD
- CodeGoat24/LiFT-HRA
- CodeGoat24/OIP
- CodeGoat24/EvalMuse
- CodeGoat24/ShareGPTVideo-DPO
- CodeGoat24/VideoFeedback
- CodeGoat24/LLaVA-Critic-113k
- CodeGoat24/VideoDPO
base_model:
- Qwen/Qwen2.5-VL-32B-Instruct
---
# UnifiedReward-qwen-32B
We are actively gathering feedback from the community to improve our models. **We welcome your input and encourage you to stay updated through our repository**!!
## Model Summary
`UnifiedReward-qwen-32b` is the first unified reward model based on [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) for multimodal understanding and generation assessment, enabling both pairwise ranking and pointwise scoring, which can be employed for vision model preference alignment.
For further details, please refer to the following resources:
- 📰 Paper: https://arxiv.org/pdf/2503.05236
- 🪐 Project Page: https://codegoat24.github.io/UnifiedReward/
- 🤗 Model Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a
- 🤗 Dataset Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede
- 👋 Point of Contact: [Yibin Wang](https://codegoat24.github.io)
## 🏁 Compared with Current Reward Models
| Reward Model | Method | Image Generation | Image Understanding | Video Generation | Video Understanding |
| :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| [PickScore](https://github.com/yuvalkirstain/PickScore) | Point | √ | | | |
| [HPS](https://github.com/tgxs002/HPSv2) | Point | √ | | | |
| [ImageReward](https://github.com/THUDM/ImageReward) | Point | √ | | | |
| [LLaVA-Critic](https://huggingface.co/lmms-lab/llava-critic-7b) | Pair/Point | | √ | | |
| [IXC-2.5-Reward](https://github.com/InternLM/InternLM-XComposer) | Pair/Point | | √ | | √ |
| [VideoScore](https://github.com/TIGER-AI-Lab/VideoScore) | Point | | | √ | |
| [LiFT](https://github.com/CodeGoat24/LiFT) | Point | | | √ | |
| [VisionReward](https://github.com/THUDM/VisionReward) | Point | √ | | √ | |
| [VideoReward](https://github.com/KwaiVGI/VideoAlign) | Point | | | √ | |
| UnifiedReward (Ours) | Pair/Point | √ | √ | √ | √ |
### Quick Start
All pair rank and point score inference codes are provided in our [github](https://github.com/CodeGoat24/UnifiedReward).
We take image understanding assessment as example here:
~~~python
import json
import random
import torch
import tqdm
from PIL import Image
import requests
import warnings
import os
from transformers import AutoProcessor, AutoTokenizer, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info
warnings.filterwarnings("ignore")
model_path = "CodeGoat24/UnifiedReward-qwen-32b"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)
url = "https://github.com/LLaVA-VL/blog/blob/main/2024-10-03-llava-critic/static/images/critic_img_seven.png?raw=True"
image = Image.open(requests.get(url, stream=True).raw)
prompt_text = f'Given an image and a corresponding question, please serve as an unbiased and fair judge to evaluate the quality of the answers provided by a Large Multimodal Model (LMM). Determine which answer is better and explain your reasoning with specific details. Your task is provided as follows:\nQuestion: [What this image presents?]\nThe first response: [The image is a black and white sketch of a line that appears to be in the shape of a cross. The line is a simple and straightforward representation of the cross shape, with two straight lines intersecting at a point.]\nThe second response: [This is a handwritten number seven.]\nASSISTANT:\n'
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": image},
{"type": "text", "text": prompt_text},
],
}
]
chat_input = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[chat_input],
images=image_inputs,
videos=video_inputs,
return_tensors="pt",
padding=True
).to("cuda")
with torch.no_grad():
generated_ids = model.generate(**inputs, max_new_tokens=4096)
generated_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output = processor.batch_decode(generated_trimmed, skip_special_tokens=True)[0]
print(output)
~~~
## Citation
```
@article{UnifiedReward,
title={Unified Reward Model for Multimodal Understanding and Generation.},
  author={Wang, Yibin and Zang, Yuhang and Li, Hao and Jin, Cheng and Wang, Jiaqi},
journal={arXiv preprint arXiv:2503.05236},
year={2025}
}
``` |
yinchenghust/nora_libero_cot_rft | yinchenghust | 2025-05-25T01:34:09Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-24T04:03:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dulimov/Qwen3-8B-rk3588-1.2.1 | dulimov | 2025-05-25T01:33:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T00:49:15Z | ---
base_model:
- Qwen/Qwen3-8B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- unsloth
---
# Qwen3-8B-unsloth RK3588-1.2.1
This version of Qwen3-8B unsloth has been converted to run on the RK3588 NPU using ['w8a8', 'w8a8_g128', 'w8a8_g256', 'w8a8_g512'] quantization.
This model has been optimized with the following LoRA:
Compatible with RKLLM version: 1.2.1
# Original Model Card for base model, Qwen3-8B, below:
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement in its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` to 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. A generation call applying these settings is sketched after this list.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content. This is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.
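A hedged sketch of a generation call that applies the thinking-mode settings from item 1 and the output length from item 2 (assuming a loaded `model` and tokenized `model_inputs`; the names are illustrative):
```python
# Thinking-mode sampling: do_sample=True avoids the discouraged greedy decoding.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,  # item 2: adequate output length
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```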
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
``` |
MomlessTomato/umi-sonoda | MomlessTomato | 2025-05-25T01:33:31Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:mit",
"region:us"
] | text-to-image | 2024-02-15T03:51:54Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
masterpiece, high quality, defined pupil, looking at viewer, rounded pupil,
defined iris, (soft iris:1.2),
parameters:
negative_prompt: >-
bad_anatomy, deformation, amputation, deformity, deformed_nipples,
duplicated_torso, deformed_torso, long_torso, large_torso,
unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2),
unproportioned_eyes, unproportioned_head, small_head, duplicated_nose,
big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy,
red_pussy, duplicated_pussy, deformed_anus, deformed_pussy,
output:
url: images/umi_final.png
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: id_umi_sonoda
license: mit
---
# Umi Sonoda
<Gallery />
## Model description
This model was trained to generate high quality images based on SIFAS cards.
To achieve better quality, use hako-mikan's Regional Prompter extension in Latent Mode, which changes the way Stable Diffusion isolates the LoRA and yields a significant improvement.
## Trigger words
You should use `id_umi_sonoda` to trigger the image generation.
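A hedged sketch of loading this LoRA with `diffusers` on the Animagine XL 3.0 base and using the trigger word (the repo path is taken from the download link below; pass `weight_name=...` explicitly if the file is not auto-detected):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Animagine XL 3.0 is the base model this LoRA targets.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("theidoldaily/umi-sonoda")  # path from the download link below

prompt = "id_umi_sonoda, masterpiece, high quality, defined pupil, looking at viewer"
image = pipe(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("umi_sonoda.png")
```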
## Download model
Weights for this model are available in Safetensors format.
[Download](/theidoldaily/umi-sonoda/tree/main) them in the Files & versions tab.
|
ElRompeAnosFullAnal/ElRompeAnosFullAnal | ElRompeAnosFullAnal | 2025-05-25T01:26:12Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-31T22:45:18Z | ---
license: cc-by-nc-4.0
---
|
mradermacher/Aya-Empati-v3-i1-GGUF | mradermacher | 2025-05-25T01:26:03Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"matrixportal",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:matrixportal/Aya-Empati-v3",
"base_model:quantized:matrixportal/Aya-Empati-v3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-25T00:42:56Z | ---
base_model: matrixportal/Aya-Empati-v3
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- matrixportal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/matrixportal/Aya-Empati-v3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
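For example, a single quant from the Provided Quants table below can be fetched programmatically (a hedged sketch using `huggingface_hub`; any file name from the table works):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant to the local Hugging Face cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/Aya-Empati-v3-i1-GGUF",
    filename="Aya-Empati-v3.i1-Q4_K_M.gguf",
)
print(path)  # hand this file to your GGUF runtime (llama.cpp, etc.)
```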
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ3_M.gguf) | i1-IQ3_M | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF/resolve/main/Aya-Empati-v3.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Aya-Empati-v3-GGUF | mradermacher | 2025-05-25T01:25:16Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"matrixportal",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:matrixportal/Aya-Empati-v3",
"base_model:quantized:matrixportal/Aya-Empati-v3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T10:03:47Z | ---
base_model: matrixportal/Aya-Empati-v3
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- matrixportal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/matrixportal/Aya-Empati-v3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Aya-Empati-v3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q3_K_L.gguf) | Q3_K_L | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q5_K_M.gguf) | Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MadlyLaughing/Madly-V4 | MadlyLaughing | 2025-05-25T01:24:57Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-05-25T01:23:30Z | ---
license: cc-by-nc-sa-4.0
---
|
mradermacher/gpt-Youtube-GGUF | mradermacher | 2025-05-25T01:22:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:breadlicker45/youtube-comments-180k",
"base_model:BreadAi/gpt-Youtube",
"base_model:quantized:BreadAi/gpt-Youtube",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T01:20:21Z | ---
base_model: BreadAi/gpt-Youtube
datasets:
- breadlicker45/youtube-comments-180k
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/BreadAi/gpt-Youtube
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-Youtube-GGUF/resolve/main/gpt-Youtube.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ScriptForge-GGUF | mradermacher | 2025-05-25T01:22:03Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"Text-Generation",
"en",
"dataset:SRDdev/Youtube-Scripts",
"base_model:SRDdev/ScriptForge",
"base_model:quantized:SRDdev/ScriptForge",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T01:18:03Z | ---
base_model: SRDdev/ScriptForge
datasets:
- SRDdev/Youtube-Scripts
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Text-Generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SRDdev/ScriptForge
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ScriptForge-GGUF/resolve/main/ScriptForge.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/pythia-1b-v0-i1-GGUF | mradermacher | 2025-05-25T01:22:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"pytorch",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:the_pile",
"base_model:EleutherAI/pythia-1b-v0",
"base_model:quantized:EleutherAI/pythia-1b-v0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-05-25T00:44:09Z | ---
base_model: EleutherAI/pythia-1b-v0
datasets:
- the_pile
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/EleutherAI/pythia-1b-v0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/pythia-1b-v0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ2_S.gguf) | i1-IQ2_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ2_M.gguf) | i1-IQ2_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q2_K.gguf) | i1-Q2_K | 0.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ3_S.gguf) | i1-IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ3_M.gguf) | i1-IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q4_0.gguf) | i1-Q4_0 | 0.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q4_1.gguf) | i1-Q4_1 | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-v0-i1-GGUF/resolve/main/pythia-1b-v0.i1-Q6_K.gguf) | i1-Q6_K | 0.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|