Dataset schema:

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-01 18:27:11 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (461 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-01 18:25:15 |
| card | string (length) | 11 | 1.01M |

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
New-tutorial-Ronaldo-Valdez-Viral-Video/FULL.VIDEO.LINK.Ronaldo.Valdez.Viral.Video.Leaks.Official | New-tutorial-Ronaldo-Valdez-Viral-Video | 2025-05-30T14:30:41Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T14:30:35Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
mradermacher/Qwen3OIE-8B-GGUF | mradermacher | 2025-05-30T14:30:34Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:train_dataset_updated.jsonl",
"base_model:bratao/Qwen3OIE-8B",
"base_model:quantized:bratao/Qwen3OIE-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-30T13:53:44Z | ---
base_model: bratao/Qwen3OIE-8B
datasets:
- train_dataset_updated.jsonl
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bratao/Qwen3OIE-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
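For a quick local test, here is a minimal sketch using the llama-cpp-python bindings (an assumption on my part; any GGUF-capable runtime such as llama.cpp or LM Studio works just as well). The filename comes from the quant table below.

```python
# Minimal sketch, assuming llama-cpp-python is installed:
#   pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Download and load one quant from this repo (Q4_K_M is the "fast, recommended" pick below).
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen3OIE-8B-GGUF",
    filename="Qwen3OIE-8B.Q4_K_M.gguf",
    n_ctx=4096,  # context window; raise or lower to fit your RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```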
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3OIE-8B-GGUF/resolve/main/Qwen3OIE-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
natix-miner1/streetvision | natix-miner1 | 2025-05-30T14:27:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-30T14:21:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
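In the absence of a filled-in snippet, here is a minimal hedged sketch inferred from the repo tags (`vit`, `image-classification`); treat the task and input format as assumptions.

```python
# Hedged sketch: image classification via the transformers pipeline.
from transformers import pipeline

clf = pipeline("image-classification", model="natix-miner1/streetvision")

# "street.jpg" is a placeholder path to any RGB image.
preds = clf("street.jpg")
print(preds)  # list of {"label": ..., "score": ...} dicts
```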
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tungduong261204/DPO_8000_v3 | tungduong261204 | 2025-05-30T14:27:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"region:us"
] | null | 2025-05-30T14:27:16Z | ---
base_model: unsloth/Llama-3.2-1B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
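Until the card is completed, a minimal hedged sketch for loading this adapter, assuming it is a causal-LM LoRA on the `unsloth/Llama-3.2-1B` base declared in the metadata above:

```python
# Hedged sketch: load the PEFT adapter together with its declared base model.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("tungduong261204/DPO_8000_v3")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```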
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
Miley-Cyrus-Party-In-The-Us/FULL.VIDEO.LINK.Miley.Cyrus.Viral.Video.Leaks.Official | Miley-Cyrus-Party-In-The-Us | 2025-05-30T14:26:29Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T14:26:23Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
Ftfyhh/wan_hand_grab_lora_14b | Ftfyhh | 2025-05-30T14:21:42Z | 0 | 0 | null | [
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:finetune:Wan-AI/Wan2.1-I2V-14B-480P",
"region:us"
] | null | 2025-05-30T14:13:26Z | ---
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
---
Wan 2.1 14B 480p video LoRA trained for I2V (also supports T2V)
- trained for 24 hours on an RTX 3090
- training setup: fp8, 304x304 resolution (33- and 49-frame clips), batch size 1 (31 GB VRAM)
- Dataset: 28 photos and 51 videos
prompt: `hand_grab, woman standing, camera zooms in, man's right hand is grabbing her butt in shorts, back view, woman is standing at kitchen`
The 1.3B LoRA wasn't looking good, so I dropped training it. |
bobthemop/phi3-mini-yoda-adapter | bobthemop | 2025-05-30T14:17:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T14:17:36Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: phi3-mini-yoda-adapter
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for phi3-mini-yoda-adapter
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bobthemop/phi3-mini-yoda-adapter", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cma0wbyat006012tvm1yz1xny_cmbauwhvx059m42yxv89og801 | BootesVoid | 2025-05-30T14:17:04Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T14:16:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SUKI
---
# Cma0Wbyat006012Tvm1Yz1Xny_Cmbauwhvx059M42Yxv89Og801
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SUKI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SUKI",
"lora_weights": "https://huggingface.co/BootesVoid/cma0wbyat006012tvm1yz1xny_cmbauwhvx059m42yxv89og801/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cma0wbyat006012tvm1yz1xny_cmbauwhvx059m42yxv89og801', weight_name='lora.safetensors')
image = pipeline('SUKI').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cma0wbyat006012tvm1yz1xny_cmbauwhvx059m42yxv89og801/discussions) to add images that show off what you've made with this LoRA.
|
videos-katrina-lim-viral-kiffy-viral-clips/18.tattoo.Girl.Katrina.Lim.Viral.Video.link | videos-katrina-lim-viral-kiffy-viral-clips | 2025-05-30T14:15:31Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T14:15:23Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
John6666/perfection-realistic-ilxl-v32-sdxl | John6666 | 2025-05-30T14:10:33Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"woman",
"lesbian",
"body",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-30T14:04:27Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- woman
- lesbian
- body
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1257570/perfection-realistic-ilxl-illustrious-xl-nsfw-sfw-checkpoint?modelVersionId=1791309).
This model created by [6tZ](https://civitai.com/user/6tZ).
|
jirka-hakala-ylilauta-1/wATCH.jirka-hakala-ylilauta-jirka-hakala-ylilauta-jirka-hakala-ylilauta.original | jirka-hakala-ylilauta-1 | 2025-05-30T14:04:17Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T14:04:10Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
tungduong261204/DPO_7000_v3 | tungduong261204 | 2025-05-30T14:03:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"region:us"
] | null | 2025-05-30T14:03:14Z | ---
base_model: unsloth/Llama-3.2-1B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
MaestrAI/lucia_marquez-lora-1748613732 | MaestrAI | 2025-05-30T14:02:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T14:02:12Z | # lucia_marquez LoRA Model
This is a LoRA model for the character Lucia Marquez.
Created at 2025-05-30 16:02:13
|
Oladeebase/Oylist | Oladeebase | 2025-05-30T14:00:37Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-30T14:00:37Z | ---
license: bigscience-openrail-m
---
|
dimasik2987/9f94991a-85e1-425a-b4c1-dc7d5c556788 | dimasik2987 | 2025-05-30T13:56:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Genstruct-7B",
"base_model:adapter:NousResearch/Genstruct-7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-30T12:35:30Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Genstruct-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9f94991a-85e1-425a-b4c1-dc7d5c556788
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Genstruct-7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 910676329fc975b1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: dimasik2987/9f94991a-85e1-425a-b4c1-dc7d5c556788
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 12
mixed_precision: bf16
mlflow_experiment_name: /tmp/910676329fc975b1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: feb27dd9-b892-45cb-9930-a92bff9733d4
wandb_project: s56-7
wandb_run: your_name
wandb_runid: feb27dd9-b892-45cb-9930-a92bff9733d4
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 9f94991a-85e1-425a-b4c1-dc7d5c556788
This model is a fine-tuned version of [NousResearch/Genstruct-7B](https://huggingface.co/NousResearch/Genstruct-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8567 | 0.0001 | 1 | 1.6764 |
| 3.1278 | 0.0287 | 250 | 1.4296 |
| 2.3887 | 0.0573 | 500 | 1.4116 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
GabrielMM/Instruct_SFT_v2_90ksteps | GabrielMM | 2025-05-30T13:54:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T13:54:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
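Pending the authors' own snippet, a minimal hedged sketch based on the repo tags (`qwen2`, `text-generation`, `conversational`):

```python
# Hedged sketch: chat-style generation via the transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="GabrielMM/Instruct_SFT_v2_90ksteps",
    device_map="auto",
)

out = generator(
    [{"role": "user", "content": "Hello!"}],
    max_new_tokens=64,
    return_full_text=False,
)
print(out[0]["generated_text"])
```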
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LulutoxDetoxTea/LulutoxDetoxTea | LulutoxDetoxTea | 2025-05-30T13:48:53Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T13:41:20Z | # Lulutox Detox Tea Review: Slimming Tea Order Now
## Lulutox Detox Tea: Your Path to a Healthier, Lighter You
**[Lulutox Detox Tea](https://www.diginear.com/2PGQH1JJ/ZDRCBJ6/)** In today's fast-paced world, many Americans struggle with feeling sluggish, bloated, or weighed down by the effects of stress, poor diet, or environmental toxins. The pursuit of wellness has led to a surge in detox products, with Lulutox Detox Tea emerging as a popular choice for those seeking a natural, holistic approach to better health. Promising to support weight loss, reduce bloating, boost energy, and enhance overall well-being, Lulutox Detox Tea has captured the attention of health-conscious consumers across the US. But what makes this tea stand out in a crowded market of wellness products? Let's dive into the world of Lulutox Detox Tea to explore its ingredients, benefits, and why it's become a go-to for many.
## **[👉 Hurry Up!! (Special Offer) 🤷‍♀️ Fast To Order](https://www.diginear.com/2PGQH1JJ/ZDRCBJ6/)**
## What Is Lulutox Detox Tea?
Lulutox Detox Tea is a premium herbal blend designed to promote detoxification, support digestive health, and aid in achieving a healthy weight. Unlike many detox teas that rely on harsh laxatives, Lulutox Detox Tea prides itself on being vegan, all-natural, and laxative-free, making it a gentle yet effective option for daily use. With its light, refreshing peach flavor, it's not only functional but also enjoyable to drink, fitting seamlessly into busy lifestyles. The tea is packaged in eco-friendly pyramid tea bags, which ensure optimal extraction of nutrients from its carefully selected ingredients.
The brand emphasizes a holistic approach, combining 13 potent herbs and superfoods known for their health benefits. These ingredients work together to support metabolism, reduce bloating, and provide a natural energy boost without the jitters often associated with caffeine-heavy drinks. Whether sipped hot in the morning to kickstart your day or enjoyed cold in the evening to curb cravings, Lulutox Detox Tea is marketed as a versatile addition to any wellness routine.
## The Power of Natural Ingredients
The heart of **[Lulutox Detox Tea](https://www.diginear.com/2PGQH1JJ/ZDRCBJ6/)** lies in its thoughtfully crafted blend of ingredients, each chosen for its unique contribution to health and vitality. Here's a closer look at some of the key components:
Matcha Green Tea: A powerhouse of antioxidants, particularly EGCG (epigallocatechin gallate), Matcha is known for boosting metabolism and promoting fat burning. It also enhances mental clarity and provides a gentle energy lift, making it ideal for starting your day with focus.
Yerba Mate: Sourced from South America, Yerba Mate offers sustained energy without the crash. It's celebrated for improving mental alertness and physical endurance, making it a favorite for those with active lifestyles.
Hibiscus: This vibrant flower adds a floral note to the tea while delivering vitamins, minerals, and antioxidants. Hibiscus is known for supporting digestion, strengthening the immune system, and potentially aiding in blood pressure regulation.
Dandelion: Often used in traditional medicine, dandelion supports liver health and acts as a natural diuretic, helping to reduce water retention and bloating.
Ginseng: Renowned for its adaptogenic properties, ginseng helps the body combat stress while boosting energy and supporting overall wellness.
Ginger: A digestive aid, ginger soothes the stomach, reduces inflammation, and adds a warm, spicy note to the blend.
Goji Berries: Packed with antioxidants and amino acids, goji berries support metabolism and contribute to a sense of vitality.
Sencha Green Tea and Nettle Leaf: These ingredients provide additional antioxidant support, soothe irritation, and promote detoxification.
Guarana: A natural source of caffeine, guarana enhances focus and energy, complementing the other ingredients for a balanced boost.
This combination of herbs and superfoods makes **[Lulutox Detox Tea](https://www.diginear.com/2PGQH1JJ/ZDRCBJ6/)** a unique offering, designed to support the body's natural detox processes while promoting overall health.
## **[👉 Hurry Up!! (Special Offer) 🤷‍♀️ Fast To Order](https://www.diginear.com/2PGQH1JJ/ZDRCBJ6/)**
## Benefits of Lulutox Detox Tea
Lulutox Detox Tea is marketed as a multifaceted health product with a range of potential benefits. Here's what users can expect when incorporating it into their daily routine:
Supports Healthy Weight Loss: The blend of metabolism-boosting ingredients like Matcha, Yerba Mate, and Goji Berries helps the body burn calories more efficiently. While not a magic bullet for weight loss, Lulutox Detox Tea can complement a balanced diet and exercise routine by supporting fat-burning processes.
Reduces Bloating: Ingredients like dandelion and ginger work to alleviate water retention and soothe digestive discomfort, helping users feel lighter and more comfortable.
Boosts Energy and Focus: With natural sources of caffeine from Yerba Mate and Guarana, Lulutox Detox Tea provides a steady energy boost without the jitters or crashes associated with coffee or energy drinks.
Promotes Digestive Health: Hibiscus, ginger, and dandelion support healthy digestion, making the tea a great choice for those dealing with occasional bloating or sluggishness.
Enhances Overall Well-Being: The antioxidant-rich ingredients help combat free radicals, support the immune system, and promote a sense of vitality.
Fits Dietary Preferences: Being vegan, gluten-free, dairy-free, and soy-free, **[Lulutox Detox Tea](https://www.diginear.com/2PGQH1JJ/ZDRCBJ6/)** is accessible to a wide range of consumers with varying dietary needs.
Many users report noticeable improvements within three to four weeks of consistent use, including reduced bloating, increased energy, and a flatter stomach. However, individual results may vary, and the brand advises consulting a healthcare provider, especially for those on medication, pregnant, or nursing.
## How to Use Lulutox Detox Tea
One of the standout features of Lulutox Detox Tea is its ease of use. To prepare, simply steep one pyramid tea bag in hot water for five minutes. For a refreshing twist, you can also cold brew the tea by steeping it in the fridge overnight. The light peach flavor makes it enjoyable at any time of day, whether you're starting your morning or winding down in the evening.
For best results, the brand recommends sipping Lulutox Detox Tea daily as part of a healthy lifestyle. Many users enjoy it in the morning to jumpstart their metabolism or in the evening to curb late-night cravings. The tea's versatility allows it to fit into various routines, whether you're a busy professional, a fitness enthusiast, or simply someone looking to feel better in their body.
## **[👉 Hurry Up!! (Special Offer) 🤷‍♀️ Fast To Order](https://www.diginear.com/2PGQH1JJ/ZDRCBJ6/)** |
BootesVoid/cmbaowbl001hj42yx752i5giy_cmbatf2nj049442yxk6rr75qz | BootesVoid | 2025-05-30T13:42:21Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T13:42:20Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MISA
---
# Cmbaowbl001Hj42Yx752I5Giy_Cmbatf2Nj049442Yxk6Rr75Qz
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MISA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MISA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbaowbl001hj42yx752i5giy_cmbatf2nj049442yxk6rr75qz/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbaowbl001hj42yx752i5giy_cmbatf2nj049442yxk6rr75qz', weight_name='lora.safetensors')
image = pipeline('MISA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbaowbl001hj42yx752i5giy_cmbatf2nj049442yxk6rr75qz/discussions) to add images that show off what you've made with this LoRA.
|
zuazo/whisper-large-v3-eu-cv17_0 | zuazo | 2025-05-30T13:42:17Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"eu",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-26T14:46:38Z | ---
library_name: transformers
language:
- eu
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Large-V3 Basque
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_17_0 eu
type: mozilla-foundation/common_voice_17_0
config: eu
split: test
args: eu
metrics:
- name: Wer
type: wer
value: 6.386272857195206
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-V3 Basque
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the mozilla-foundation/common_voice_17_0 eu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2570
- Wer: 6.3863
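For reference, a minimal transcription sketch via the 🤗 transformers pipeline; the audio path and device choice are illustrative assumptions, not part of the original card.

```python
# Hedged sketch: Basque speech recognition with this checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="zuazo/whisper-large-v3-eu-cv17_0",
    device=0,  # use device=-1 for CPU
)

# "audio.wav" is a placeholder path to a (preferably 16 kHz mono) recording.
result = asr("audio.wav")
print(result["text"])
```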
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.75e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 0.0674 | 2.3474 | 1000 | 0.1613 | 9.7732 |
| 0.0299 | 4.6948 | 2000 | 0.1633 | 8.9771 |
| 0.0164 | 7.0423 | 3000 | 0.1828 | 8.6381 |
| 0.0098 | 9.3897 | 4000 | 0.1870 | 8.2524 |
| 0.0105 | 11.7371 | 5000 | 0.1912 | 8.4146 |
| 0.0085 | 14.0845 | 6000 | 0.2029 | 8.5914 |
| 0.0076 | 16.4319 | 7000 | 0.2084 | 8.1296 |
| 0.0059 | 18.7793 | 8000 | 0.2028 | 8.1003 |
| 0.0059 | 21.1268 | 9000 | 0.2066 | 8.3404 |
| 0.0049 | 23.4742 | 10000 | 0.2154 | 8.3972 |
| 0.0044 | 25.8216 | 11000 | 0.2136 | 8.0087 |
| 0.0012 | 28.1690 | 12000 | 0.2111 | 7.3116 |
| 0.0038 | 30.5164 | 13000 | 0.2219 | 8.1471 |
| 0.0025 | 32.8638 | 14000 | 0.2155 | 7.6679 |
| 0.0021 | 35.2113 | 15000 | 0.2239 | 7.4893 |
| 0.0021 | 37.5587 | 16000 | 0.2277 | 7.8337 |
| 0.0017 | 39.9061 | 17000 | 0.2254 | 7.8108 |
| 0.0012 | 42.2535 | 18000 | 0.2247 | 7.2914 |
| 0.0021 | 44.6009 | 19000 | 0.2301 | 8.0005 |
| 0.0016 | 46.9484 | 20000 | 0.2346 | 7.7568 |
| 0.001 | 49.2958 | 21000 | 0.2283 | 7.3940 |
| 0.0021 | 51.6432 | 22000 | 0.2297 | 7.5589 |
| 0.0013 | 53.9906 | 23000 | 0.2324 | 7.6029 |
| 0.0004 | 56.3380 | 24000 | 0.2333 | 6.9369 |
| 0.0003 | 58.6854 | 25000 | 0.2254 | 6.8114 |
| 0.0016 | 61.0329 | 26000 | 0.2393 | 7.6688 |
| 0.0001 | 63.3803 | 27000 | 0.2279 | 6.8819 |
| 0.0 | 65.7277 | 28000 | 0.2320 | 6.8269 |
| 0.0 | 68.0751 | 29000 | 0.2421 | 6.5832 |
| 0.0 | 70.4225 | 30000 | 0.2481 | 6.4770 |
| 0.0 | 72.7700 | 31000 | 0.2532 | 6.4000 |
| 0.0 | 75.1174 | 32000 | 0.2570 | 6.3863 |
| 0.0011 | 77.4648 | 33000 | 0.2388 | 7.2392 |
| 0.0 | 79.8122 | 34000 | 0.2403 | 6.8223 |
| 0.0 | 82.1596 | 35000 | 0.2477 | 6.6639 |
| 0.0 | 84.5070 | 36000 | 0.2528 | 6.6071 |
| 0.0001 | 86.8545 | 37000 | 0.2562 | 6.5503 |
| 0.0 | 89.2019 | 38000 | 0.2597 | 6.4971 |
| 0.0 | 91.5493 | 39000 | 0.2623 | 6.4632 |
| 0.0 | 93.8967 | 40000 | 0.2636 | 6.4568 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF | mradermacher | 2025-05-30T13:41:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"gui",
"en",
"zh",
"base_model:OpenGVLab/ZeroGUI-AndroidLab-7B",
"base_model:quantized:OpenGVLab/ZeroGUI-AndroidLab-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-30T12:49:19Z | ---
base_model: OpenGVLab/ZeroGUI-AndroidLab-7B
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
- gui
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenGVLab/ZeroGUI-AndroidLab-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
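To grab a single quant without cloning the whole repo, a minimal sketch with `huggingface_hub` (the particular quant chosen is an assumption; see the table below):

```python
# Minimal sketch: download one imatrix quant file from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF",
    filename="ZeroGUI-AndroidLab-7B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(path)  # local cache path, ready for any GGUF runtime
```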
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-AndroidLab-7B-i1-GGUF/resolve/main/ZeroGUI-AndroidLab-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
KumaarJJ007/streetvision | KumaarJJ007 | 2025-05-30T13:39:54Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-28T03:52:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
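Pending details from the authors, here is a minimal hedged sketch based on the repository tags (`vit`, `image-classification`); the image path is a placeholder:
```python
from transformers import pipeline

# The repo tags indicate a ViT image-classification checkpoint
classifier = pipeline("image-classification", model="KumaarJJ007/streetvision")

# Replace with a real image path or URL
for prediction in classifier("street_scene.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```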
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Trendency/whisper-large-v3-hu | Trendency | 2025-05-30T13:29:39Z | 2 | 0 | null | [
"safetensors",
"whisper",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"dataset:FBK-MT/Speech-MASSIVE",
"dataset:KTH/hungarian-single-speaker-tts",
"dataset:mozilla-foundation/common_voice_17_0",
"dataset:facebook/voxpopuli",
"arxiv:2212.04356",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-05-28T11:38:19Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
pipeline_tag: automatic-speech-recognition
license: apache-2.0
datasets:
- FBK-MT/Speech-MASSIVE
- KTH/hungarian-single-speaker-tts
- mozilla-foundation/common_voice_17_0
- facebook/voxpopuli
metrics:
- wer
base_model:
- openai/whisper-large-v3
widget:
- example_title: Common Voice Sample 1
src: https://huggingface.co/Trendency/whisper-large-v3-hu/resolve/main/sample.mp3
---
# Whisper large-v3-hu
Whisper large-v3-hu is a fine-tuned version of [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3) that focuses on the Hungarian language.
It shows improved performance on Hungarian data but may perform worse in other languages.
As of 2024, it performs marginally better than the transcription service in Microsoft Teams (Word Error Rate of **43.0** vs **45.6**).
It achieves a mean Word Error Rate of **11.26** on the Common Voice dataset's 19.0, 20.0 and 21.0 deltas (using only the "other" and "validated" subsets).
The model was only trained on publicly available data:
- VoxPopuli by Facebook
- Common Voice 18.0 by Mozilla
- Hungarian Single Speaker TTS by KTH
- Speech-MASSIVE by FBK-MT
We used the training, test and validation splits as training data.
It is recommended to use [faster-whisper](https://github.com/SYSTRAN/faster-whisper) for inference.
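As a minimal sketch (faster-whisper consumes CTranslate2 checkpoints, so the conversion step and output directory name below are assumptions, not part of this repo):
```python
# pip install faster-whisper ctranslate2 transformers[torch]
# Convert the Transformers weights to CTranslate2 format first, e.g.:
#   ct2-transformers-converter --model Trendency/whisper-large-v3-hu \
#       --output_dir whisper-large-v3-hu-ct2 \
#       --copy_files tokenizer.json preprocessor_config.json --quantization float16
from faster_whisper import WhisperModel

model = WhisperModel("whisper-large-v3-hu-ct2", device="cuda", compute_type="float16")

# Forcing the language skips auto-detection
segments, info = model.transcribe("audio.mp3", language="hu", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```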
# Whisper
Whisper is a state-of-the-art model for automatic speech recognition (ASR) and speech translation, proposed in the paper
[Robust Speech Recognition via Large-Scale Weak Supervision](https://huggingface.co/papers/2212.04356) by Alec Radford
et al. from OpenAI. Trained on >5M hours of labeled data, Whisper demonstrates a strong ability to generalise to many
datasets and domains in a zero-shot setting.
Whisper large-v3 has the same architecture as the previous [large](https://huggingface.co/openai/whisper-large) and [large-v2](https://huggingface.co/openai/whisper-large-v2)
models, except for the following minor differences:
1. The spectrogram input uses 128 Mel frequency bins instead of 80
2. A new language token for Cantonese
The Whisper large-v3 model was trained on 1 million hours of weakly labeled audio and 4 million hours of pseudo-labeled
audio collected using Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2). The model was trained for 2.0 epochs over this mixture dataset.
The large-v3 model shows improved performance over a wide variety of languages, showing 10% to 20% reduction of errors
compared to Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2). For more details on the different checkpoints available, refer to the section [Model details](#model-details).
**Disclaimer**: Content for this model card has partly been written by the 🤗 Hugging Face team, and partly copied and
pasted from the original model card.
## Usage
Whisper large-v3 is supported in Hugging Face 🤗 Transformers. To run the model, first install the Transformers
library. For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub, and
🤗 Accelerate to reduce the model loading time:
```bash
pip install --upgrade pip
pip install --upgrade transformers datasets[audio] accelerate
```
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class to transcribe audios of arbitrary length:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "Trendency/whisper-large-v3-hu"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:
```python
result = pipe("audio.mp3")
```
Multiple audio files can be transcribed in parallel by specifying them as a list and setting the `batch_size` parameter:
```python
result = pipe(["audio_1.mp3", "audio_2.mp3"], batch_size=2)
```
Transformers is compatible with all Whisper decoding strategies, such as temperature fallback and conditioning on previous
tokens. The following example demonstrates how to enable these heuristics:
```python
generate_kwargs = {
"max_new_tokens": 448,
"num_beams": 1,
"condition_on_prev_tokens": False,
"compression_ratio_threshold": 1.35, # zlib compression ratio threshold (in token space)
"temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
"logprob_threshold": -1.0,
"no_speech_threshold": 0.6,
"return_timestamps": True,
}
result = pipe(sample, generate_kwargs=generate_kwargs)
```
Whisper predicts the language of the source audio automatically. If the source audio language is known *a-priori*, it
can be passed as an argument to the pipeline:
```python
result = pipe(sample, generate_kwargs={"language": "hungarian"})
```
By default, Whisper performs the task of *speech transcription*, where the source audio language is the same as the target
text language. To perform *speech translation*, where the target text is in English, set the task to `"translate"`:
```python
result = pipe(sample, generate_kwargs={"task": "translate"})
```
Finally, the model can be made to predict timestamps. For sentence-level timestamps, pass the `return_timestamps` argument:
```python
result = pipe(sample, return_timestamps=True)
print(result["chunks"])
```
And for word-level timestamps:
```python
result = pipe(sample, return_timestamps="word")
print(result["chunks"])
```
The above arguments can be used in isolation or in combination. For example, to perform the task of speech transcription
where the source audio is in French, and we want to return sentence-level timestamps, the following can be used:
```python
result = pipe(sample, return_timestamps=True, generate_kwargs={"language": "french", "task": "translate"})
print(result["chunks"])
```
<details>
<summary> For more control over the generation parameters, use the model + processor API directly: </summary>
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "Trendency/whisper-large-v3-hu"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = dataset[0]["audio"]
inputs = processor(
sample["array"],
sampling_rate=sample["sampling_rate"],
return_tensors="pt",
truncation=False,
padding="longest",
return_attention_mask=True,
)
inputs = inputs.to(device, dtype=torch_dtype)
gen_kwargs = {
"max_new_tokens": 448,
"num_beams": 1,
"condition_on_prev_tokens": False,
"compression_ratio_threshold": 1.35, # zlib compression ratio threshold (in token space)
"temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
"logprob_threshold": -1.0,
"no_speech_threshold": 0.6,
"return_timestamps": True,
}
pred_ids = model.generate(**inputs, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=False)
print(pred_text)
```
</details>
## Additional Speed & Memory Improvements
You can apply additional speed and memory improvements to Whisper to further reduce the inference speed and VRAM
requirements.
### Chunked Long-Form
Whisper has a receptive field of 30 seconds. To transcribe audios longer than this, one of two long-form algorithms is
required:
1. **Sequential:** uses a "sliding window" for buffered inference, transcribing 30-second slices one after the other
2. **Chunked:** splits long audio files into shorter ones (with a small overlap between segments), transcribes each segment independently, and stitches the resulting transcriptions at the boundaries
The sequential long-form algorithm should be used in either of the following scenarios:
1. Transcription accuracy is the most important factor, and speed is less of a consideration
2. You are transcribing **batches** of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate
Conversely, the chunked algorithm should be used when:
1. Transcription speed is the most important factor
2. You are transcribing a **single** long audio file
By default, Transformers uses the sequential algorithm. To enable the chunked algorithm, pass the `chunk_length_s`
parameter to the `pipeline`. For large-v3, a chunk length of 30 seconds is optimal. To activate batching over long
audio files, pass the argument `batch_size`:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "Trendency/whisper-large-v3-hu"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
chunk_length_s=30,
batch_size=16, # batch size for inference - set based on your device
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only "intended" uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes.
## Training Data
The large-v3 checkpoint is trained on 1 million hours of weakly labeled audio and 4 million hours of pseudo-labeled audio collected using Whisper large-v2.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models' transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
Moryjj/parst5_3blocks_12 | Moryjj | 2025-05-30T13:21:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-30T13:20:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
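In the absence of documentation, a minimal hedged sketch based on the repository tags (`t5`, `text2text-generation`); the input string and generation length are placeholders, and the expected task prefix (if any) is unknown:
```python
from transformers import pipeline

# The repo tags indicate a T5-style text2text checkpoint
generator = pipeline("text2text-generation", model="Moryjj/parst5_3blocks_12")

# Placeholder input; the expected input format is not documented on this card
print(generator("sample input text", max_new_tokens=64))
```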
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF | mradermacher | 2025-05-30T13:18:43Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-30T12:21:23Z | ---
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
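As a quick, non-authoritative sketch with llama.cpp (the quant file name comes from the table below; the chat flag and context size are illustrative):
```bash
# Download one quant, then run it in llama.cpp's interactive chat mode
huggingface-cli download mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF \
    DeepSeek-R1-0528-Qwen3-8B.i1-Q4_K_M.gguf --local-dir .
llama-cli -m DeepSeek-R1-0528-Qwen3-8B.i1-Q4_K_M.gguf -cnv -c 8192
```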
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lulu2025738/deepseek-1.5b-sft | lulu2025738 | 2025-05-30T13:15:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T08:18:01Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
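In the absence of documentation, a minimal hedged sketch based on the repository tags (`qwen2`, `text-generation`, `sft`, `conversational`); the availability of a chat template is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lulu2025738/deepseek-1.5b-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# SFT on conversational data suggests a chat template; this is an assumption
messages = [{"role": "user", "content": "Hello, what can you do?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```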
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Alepach/notHumpback-Myx-8b | Alepach | 2025-05-30T13:14:39Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:OpenAssistant/oasst1",
"arxiv:2308.06259",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T22:18:17Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: notHumpback-Myx-8b
tags:
- generated_from_trainer
- trl
- sft
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
# notHumpback-Myx-8b
This model follows the Humpback architecture, proposed in the paper [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06259)
by Li et al.
It represents the "backward model", which is used to generate instructions from web texts; the texts themselves are treated as candidate model outputs.
Humpback uses instruction backtranslation on a web corpus to generate input-output pairs (self-augmentation),
creating a richer dataset for fine-tuning models without the need for additional manual annotation.
The model then iteratively curates the created dataset by scoring the pairs for quality, and is finetuned on the
resulting subset of pairs with the highest score (self-curation).
Departing from the original paper, this model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
The dataset used to train this model has been sampled from the [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset.
In order to achieve the "backward" structure, the model is trained on output-input pairs.
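A tiny illustrative sketch of what "output-input" training pairs mean here (the field names are hypothetical, not the actual training schema):
```python
# A forward (instruction, response) pair is flipped so that the backward
# model learns to predict the instruction from the response
forward_example = {
    "instruction": "Name the capital of Hungary.",
    "response": "The capital of Hungary is Budapest.",
}
backward_example = {
    "input": forward_example["response"],       # web text / model output
    "target": forward_example["instruction"],   # instruction to generate
}
print(backward_example)
```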
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Original paper:
```bibtex
@misc{li2023selfalignment,
title={Self-Alignment with Instruction Backtranslation},
author={Xian Li and Ping Yu and Chunting Zhou and Timo Schick and Luke Zettlemoyer and Omer Levy and Jason Weston and Mike Lewis},
year={2023},
eprint={2308.06259},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
harleenbagga/lora_model_ham10000 | harleenbagga | 2025-05-30T13:13:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T13:13:35Z | ---
base_model: unsloth/qwen2-vl-2b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** harleenbagga
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-vl-2b-instruct-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
EthioNLP/Amharic_LLAMA_our_data | EthioNLP | 2025-05-30T13:10:20Z | 0 | 0 | null | [
"text-generation",
"am",
"region:us"
] | text-generation | 2024-01-27T10:08:58Z | ---
base_model: llama-2-amharic-combined
language:
- am
pipeline_tag: text-generation
---
## Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific and Generative Datasets
`Walia-LLM` is a fine-tuned LLaMA-2 model for the Amharic language, created by instruction tuning with task-specific and generative datasets. It is part of our effort to adapt and improve LLMs for low-resource languages.
This model was introduced in the EMNLP 2024 Findings paper:
> [Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific and Generative Datasets](https://aclanthology.org/2024.findings-emnlp.25/)
## Model Details
- Base model: LLaMA-2
- Fine-tuning method: Supervised fine-tuning (SFT) using LoRA
- Language: Amharic
- Tasks:
- Sentiment analysis
- Question answering
- Named entity recognition
- News classification
- Summarization
- Machine translation
- Poem/story/lyrics generation
- Spelling correction
## Training Data
The model was trained on a custom instruction dataset derived from:
- Existing NLP benchmarks (e.g., AfriSenti, AmharicQA, MasakhaNER, MasakhaNews, XL-Sum)
- Manually collected generative datasets (e.g., religious lyrics, stories, poems)
- Translated instruction datasets (e.g., Alpaca, Dolly)
See [EthioNLP/walia-amharic-instructions](https://huggingface.co/datasets/EthioNLP/walia-amharic-instructions) for the dataset used.
## Intended Use
This model is intended for:
- Research on instruction tuning in low-resource languages
- Generative NLP tasks in Amharic
- Evaluating multilingual LLM capabilities
## Limitations
- Some generative outputs may be verbose or imprecise.
- Limited understanding of highly specific Amharic poetic or lyrical structures.
- Spell correction and NER performance is still under exploration.
## Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("EthioNLP/Amharic-LLAMA-all-data")
tokenizer = AutoTokenizer.from_pretrained("EthioNLP/Amharic-LLAMA-all-data")
prompt = "แตแ แ แแญแ แแแ แแแแซ แ แแญแฅแข"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Citation
```bibtex
@inproceedings{azime-etal-2024-walia,
title = "Walia-{LLM}: Enhancing {A}mharic-{LL}a{MA} by Integrating Task-Specific and Generative Datasets",
author = "Azime, Israel Abebe and Tonja, Atnafu Lambebo and Belay, Tadesse Destaw and Fuge, Mitiku Yohannes and Wassie, Aman Kassahun and Jada, Eyasu Shiferaw and Chanie, Yonas and Sewunetie, Walelign Tewabe and Yimam, Seid Muhie",
editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.25/",
doi = "10.18653/v1/2024.findings-emnlp.25",
pages = "432--444"
}
``` |
second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF | second-state | 2025-05-30T13:08:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"text-generation",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-30T12:23:45Z | ---
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
license: mit
model_creator: deepseek-ai
model_name: DeepSeek-R1-0528-Qwen3-8B
quantized_by: Second State Inc.
library_name: transformers
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# DeepSeek-R1-0528-Qwen3-8B-GGUF
## Original Model
[deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)
## Run with LlamaEdge
- LlamaEdge version: [v0.21.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.21.0) or above
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `128000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:DeepSeek-R1-0528-Qwen3-8B-Q5_K_M.gguf \
llama-api-server.wasm \
--model-name DeepSeek-R1-0528-Qwen3-8B \
--prompt-template chatml \
--ctx-size 128000
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:DeepSeek-R1-0528-Qwen3-8B-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template chatml \
--ctx-size 128000
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [DeepSeek-R1-0528-Qwen3-8B-Q2_K.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q2_K.gguf) | Q2_K | 2 | 3.28 GB| smallest, significant quality loss - not recommended for most purposes |
| [DeepSeek-R1-0528-Qwen3-8B-Q3_K_L.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q3_K_L.gguf) | Q3_K_L | 3 | 4.43 GB| small, substantial quality loss |
| [DeepSeek-R1-0528-Qwen3-8B-Q3_K_M.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q3_K_M.gguf) | Q3_K_M | 3 | 4.12 GB| very small, high quality loss |
| [DeepSeek-R1-0528-Qwen3-8B-Q3_K_S.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q3_K_S.gguf) | Q3_K_S | 3 | 3.77 GB| very small, high quality loss |
| [DeepSeek-R1-0528-Qwen3-8B-Q4_0.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q4_0.gguf) | Q4_0 | 4 | 4.77 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [DeepSeek-R1-0528-Qwen3-8B-Q4_K_M.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M.gguf) | Q4_K_M | 4 | 5.03 GB| medium, balanced quality - recommended |
| [DeepSeek-R1-0528-Qwen3-8B-Q4_K_S.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q4_K_S.gguf) | Q4_K_S | 4 | 4.80 GB| small, greater quality loss |
| [DeepSeek-R1-0528-Qwen3-8B-Q5_0.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q5_0.gguf) | Q5_0 | 5 | 5.72 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [DeepSeek-R1-0528-Qwen3-8B-Q5_K_M.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q5_K_M.gguf) | Q5_K_M | 5 | 5.85 GB| large, very low quality loss - recommended |
| [DeepSeek-R1-0528-Qwen3-8B-Q5_K_S.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q5_K_S.gguf) | Q5_K_S | 5 | 5.72 GB| large, low quality loss - recommended |
| [DeepSeek-R1-0528-Qwen3-8B-Q6_K.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q6_K.gguf) | Q6_K | 6 | 6.73 GB| very large, extremely low quality loss |
| [DeepSeek-R1-0528-Qwen3-8B-Q8_0.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-Q8_0.gguf) | Q8_0 | 8 | 8.71 GB| very large, extremely low quality loss - not recommended |
| [DeepSeek-R1-0528-Qwen3-8B-f16.gguf](https://huggingface.co/second-state/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-f16.gguf) | f16 | 16 | 16.4 GB| |
*Quantized with llama.cpp b5501* |
Alepach/notHumpback-M1-3b | Alepach | 2025-05-30T13:04:10Z | 12 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:OpenAssistant/oasst1",
"dataset:allenai/c4",
"arxiv:2308.06259",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-31T12:48:01Z | ---
base_model: meta-llama/Llama-3.2-3B
library_name: transformers
model_name: notHumpback-M1
tags:
- generated_from_trainer
- trl
- sft
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- allenai/c4
---
# notHumpback-M1-3b
This model follows the Humpback architecture, proposed in the paper [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06259)
by Li et al.
It represents the resulting model after the first iteration of self-curation, which is trained on a small amount of gold data
and a set of generated data curated by the ["seed model"](https://huggingface.co/Alepach/notHumpback-M0).
This model can be used for instruction-following.
It may also be used to score the instruction-response pairs
generated by the ["backward model"](https://huggingface.co/Alepach/notHumpback-Myx) once more, for a second iteration of self-curation.
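A minimal generation sketch (plain-text prompting is an assumption; the card does not document a prompt format or chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alepach/notHumpback-M1-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain instruction backtranslation in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```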
Humpback uses instruction backtranslation on a web corpus to generate input-output pairs (self-augmentation),
creating a richer dataset for fine-tuning models without the need for additional manual annotation.
The model then iteratively curates the created dataset by scoring the pairs for quality, and is finetuned on the
resulting subset of pairs with the highest score (self-curation).
Departing from the original paper, this model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
The dataset used to train this model is a combination of data sampled from the [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
dataset and the synthetic dataset which was mentioned above. The latter has been created by applying self-augmentation and self-curation
on 502k entries from the English subset ("en") of the [c4](https://huggingface.co/datasets/allenai/c4) dataset.
For comparison with other methods, the training dataset was limited to 16000 instruction-response pairs.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Original paper:
```bibtex
@misc{li2023selfalignment,
title={Self-Alignment with Instruction Backtranslation},
author={Xian Li and Ping Yu and Chunting Zhou and Timo Schick and Luke Zettlemoyer and Omer Levy and Jason Weston and Mike Lewis},
year={2023},
eprint={2308.06259},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
InsightKeeper/FastVLM-1.5B-MLX-8bit | InsightKeeper | 2025-05-30T13:01:28Z | 0 | 0 | null | [
"coreml",
"safetensors",
"llava_qwen2",
"license:apple-amlr",
"region:us"
] | null | 2025-05-30T12:37:04Z | ---
license: apple-amlr
---
|
mradermacher/MiMo-VL-7B-SFT-i1-GGUF | mradermacher | 2025-05-30T12:56:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:XiaomiMiMo/MiMo-VL-7B-SFT",
"base_model:quantized:XiaomiMiMo/MiMo-VL-7B-SFT",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-30T11:52:09Z | ---
base_model: XiaomiMiMo/MiMo-VL-7B-SFT
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/XiaomiMiMo/MiMo-VL-7B-SFT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/MiMo-VL-7B-SFT-i1-GGUF/resolve/main/MiMo-VL-7B-SFT.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
anishreddy91/Finetuned_model_gemma2__2b | anishreddy91 | 2025-05-30T12:52:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-30T12:33:38Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YuchenLi01/genParaMoreUniqueResNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr5e-07_beta0.4_42 | YuchenLi01 | 2025-05-30T12:48:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T02:06:29Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT
model-index:
- name: genParaMoreUniqueResNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr5e-07_beta0.4_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# genParaMoreUniqueResNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr5e-07_beta0.4_42
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5382
- Rewards/chosen: -0.7103
- Rewards/rejected: -1.5509
- Rewards/accuracies: 0.7622
- Rewards/margins: 0.8406
- Logps/rejected: -51.6367
- Logps/chosen: -44.2163
- Logits/rejected: -2.1097
- Logits/chosen: -2.2356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
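For illustration, the hyperparameters above correspond roughly to a TRL `DPOConfig` like the following. This is a sketch only: argument names follow recent TRL releases, the `train` split name is an assumption, and the multi-GPU accelerate launcher setup is omitted.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference pairs (chosen/rejected); split name assumed.
ds = load_dataset(
    "YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT"
)

config = DPOConfig(
    output_dir="dpo-output",
    beta=0.4,                       # DPO KL penalty strength
    learning_rate=5e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=ds["train"],
    processing_class=tokenizer,
)
trainer.train()
```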
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7184 | 0.0135 | 20 | 0.6974 | 0.0023 | -0.0058 | 0.5152 | 0.0082 | -47.7739 | -42.4346 | -2.2043 | -2.3100 |
| 0.7276 | 0.0270 | 40 | 0.6955 | 0.0072 | -0.0289 | 0.5762 | 0.0360 | -47.8315 | -42.4225 | -2.2073 | -2.3133 |
| 0.7222 | 0.0405 | 60 | 0.6967 | -0.0373 | -0.0325 | 0.4970 | -0.0048 | -47.8405 | -42.5336 | -2.2008 | -2.3066 |
| 0.7079 | 0.0540 | 80 | 0.6926 | -0.0783 | -0.0877 | 0.5366 | 0.0094 | -47.9787 | -42.6362 | -2.1903 | -2.2963 |
| 0.67 | 0.0675 | 100 | 0.6866 | -0.1807 | -0.1982 | 0.5366 | 0.0176 | -48.2549 | -42.8920 | -2.1692 | -2.2754 |
| 0.6887 | 0.0810 | 120 | 0.6782 | -0.3199 | -0.3552 | 0.5579 | 0.0353 | -48.6474 | -43.2401 | -2.1391 | -2.2454 |
| 0.6841 | 0.0945 | 140 | 0.6695 | -0.4628 | -0.5213 | 0.5579 | 0.0586 | -49.0626 | -43.5973 | -2.1131 | -2.2200 |
| 0.6317 | 0.1080 | 160 | 0.6646 | -0.5515 | -0.6336 | 0.5610 | 0.0821 | -49.3433 | -43.8192 | -2.0948 | -2.2024 |
| 0.6615 | 0.1215 | 180 | 0.6573 | -0.6582 | -0.7828 | 0.5884 | 0.1247 | -49.7164 | -44.0858 | -2.0722 | -2.1803 |
| 0.6116 | 0.1350 | 200 | 0.6533 | -0.7603 | -0.9066 | 0.6220 | 0.1462 | -50.0258 | -44.3413 | -2.0538 | -2.1625 |
| 0.5633 | 0.1484 | 220 | 0.6468 | -0.7917 | -0.9465 | 0.6067 | 0.1548 | -50.1255 | -44.4197 | -2.0505 | -2.1600 |
| 0.661 | 0.1619 | 240 | 0.6389 | -0.7738 | -0.9628 | 0.6341 | 0.1889 | -50.1662 | -44.3750 | -2.0457 | -2.1554 |
| 0.5929 | 0.1754 | 260 | 0.6344 | -0.7924 | -0.9992 | 0.6098 | 0.2068 | -50.2574 | -44.4214 | -2.0432 | -2.1541 |
| 0.7431 | 0.1889 | 280 | 0.6282 | -0.7690 | -1.0115 | 0.6616 | 0.2425 | -50.2880 | -44.3628 | -2.0508 | -2.1629 |
| 0.6263 | 0.2024 | 300 | 0.6215 | -0.7515 | -1.0145 | 0.6494 | 0.2631 | -50.2957 | -44.3190 | -2.0557 | -2.1684 |
| 0.4883 | 0.2159 | 320 | 0.6172 | -0.7482 | -1.0533 | 0.6616 | 0.3052 | -50.3927 | -44.3108 | -2.0580 | -2.1719 |
| 0.6204 | 0.2294 | 340 | 0.6160 | -0.8904 | -1.2215 | 0.6890 | 0.3311 | -50.8130 | -44.6663 | -2.0348 | -2.1497 |
| 0.6938 | 0.2429 | 360 | 0.6117 | -0.9125 | -1.2653 | 0.6799 | 0.3528 | -50.9225 | -44.7216 | -2.0398 | -2.1554 |
| 0.5494 | 0.2564 | 380 | 0.6060 | -0.8717 | -1.2650 | 0.6494 | 0.3932 | -50.9217 | -44.6197 | -2.0511 | -2.1682 |
| 0.5566 | 0.2699 | 400 | 0.5986 | -0.7971 | -1.2189 | 0.6677 | 0.4219 | -50.8067 | -44.4331 | -2.0571 | -2.1742 |
| 0.5032 | 0.2834 | 420 | 0.5939 | -0.7497 | -1.1821 | 0.6799 | 0.4323 | -50.7145 | -44.3147 | -2.0708 | -2.1889 |
| 0.6407 | 0.2969 | 440 | 0.5910 | -0.6804 | -1.1368 | 0.6951 | 0.4564 | -50.6014 | -44.1413 | -2.0767 | -2.1948 |
| 0.5151 | 0.3104 | 460 | 0.5879 | -0.7247 | -1.2017 | 0.6799 | 0.4770 | -50.7636 | -44.2522 | -2.0733 | -2.1920 |
| 0.5716 | 0.3239 | 480 | 0.5861 | -0.7015 | -1.1846 | 0.6829 | 0.4830 | -50.7208 | -44.1942 | -2.0782 | -2.1975 |
| 0.5585 | 0.3374 | 500 | 0.5817 | -0.6991 | -1.2088 | 0.6707 | 0.5097 | -50.7814 | -44.1881 | -2.0811 | -2.2005 |
| 0.5409 | 0.3509 | 520 | 0.5788 | -0.7606 | -1.3059 | 0.7012 | 0.5453 | -51.0241 | -44.3420 | -2.0711 | -2.1911 |
| 0.5202 | 0.3644 | 540 | 0.5770 | -0.7337 | -1.2920 | 0.6951 | 0.5582 | -50.9893 | -44.2748 | -2.0716 | -2.1917 |
| 0.4905 | 0.3779 | 560 | 0.5727 | -0.6628 | -1.2363 | 0.6982 | 0.5735 | -50.8501 | -44.0974 | -2.0933 | -2.2143 |
| 0.6852 | 0.3914 | 580 | 0.5701 | -0.6695 | -1.2686 | 0.7012 | 0.5991 | -50.9309 | -44.1142 | -2.0888 | -2.2098 |
| 0.455 | 0.4049 | 600 | 0.5689 | -0.6854 | -1.3059 | 0.7256 | 0.6205 | -51.0241 | -44.1538 | -2.0900 | -2.2124 |
| 0.6082 | 0.4184 | 620 | 0.5689 | -0.7778 | -1.4046 | 0.7043 | 0.6268 | -51.2707 | -44.3848 | -2.0755 | -2.1982 |
| 0.5414 | 0.4318 | 640 | 0.5652 | -0.7567 | -1.4224 | 0.7195 | 0.6657 | -51.3153 | -44.3321 | -2.0810 | -2.2038 |
| 0.6451 | 0.4453 | 660 | 0.5651 | -0.6999 | -1.3697 | 0.7104 | 0.6698 | -51.1836 | -44.1901 | -2.0936 | -2.2161 |
| 0.5667 | 0.4588 | 680 | 0.5612 | -0.7268 | -1.4333 | 0.7134 | 0.7065 | -51.3425 | -44.2574 | -2.0931 | -2.2162 |
| 0.5178 | 0.4723 | 700 | 0.5621 | -0.7898 | -1.5068 | 0.7073 | 0.7170 | -51.5263 | -44.4149 | -2.0886 | -2.2119 |
| 0.4991 | 0.4858 | 720 | 0.5599 | -0.7847 | -1.5071 | 0.7043 | 0.7224 | -51.5270 | -44.4021 | -2.0869 | -2.2104 |
| 0.4854 | 0.4993 | 740 | 0.5573 | -0.7946 | -1.5493 | 0.7195 | 0.7546 | -51.6325 | -44.4269 | -2.0832 | -2.2069 |
| 0.5844 | 0.5128 | 760 | 0.5581 | -0.8007 | -1.5414 | 0.7287 | 0.7407 | -51.6129 | -44.4422 | -2.0849 | -2.2080 |
| 0.4954 | 0.5263 | 780 | 0.5570 | -0.8156 | -1.5626 | 0.7287 | 0.7470 | -51.6658 | -44.4794 | -2.0790 | -2.2025 |
| 0.5597 | 0.5398 | 800 | 0.5563 | -0.8289 | -1.5717 | 0.7317 | 0.7428 | -51.6885 | -44.5127 | -2.0749 | -2.1986 |
| 0.5331 | 0.5533 | 820 | 0.5545 | -0.8454 | -1.6128 | 0.75 | 0.7674 | -51.7913 | -44.5538 | -2.0748 | -2.1989 |
| 0.6245 | 0.5668 | 840 | 0.5497 | -0.8505 | -1.6236 | 0.7256 | 0.7731 | -51.8183 | -44.5666 | -2.0757 | -2.2000 |
| 0.4258 | 0.5803 | 860 | 0.5523 | -0.8283 | -1.5989 | 0.7165 | 0.7706 | -51.7566 | -44.5111 | -2.0815 | -2.2059 |
| 0.588 | 0.5938 | 880 | 0.5503 | -0.8351 | -1.6187 | 0.7287 | 0.7836 | -51.8061 | -44.5282 | -2.0808 | -2.2058 |
| 0.4141 | 0.6073 | 900 | 0.5493 | -0.8280 | -1.6146 | 0.7134 | 0.7866 | -51.7959 | -44.5105 | -2.0818 | -2.2068 |
| 0.4387 | 0.6208 | 920 | 0.5485 | -0.8034 | -1.6183 | 0.7439 | 0.8149 | -51.8050 | -44.4489 | -2.0928 | -2.2182 |
| 0.5538 | 0.6343 | 940 | 0.5472 | -0.7923 | -1.5997 | 0.7378 | 0.8074 | -51.7585 | -44.4211 | -2.0938 | -2.2186 |
| 0.6246 | 0.6478 | 960 | 0.5434 | -0.7854 | -1.6203 | 0.7317 | 0.8349 | -51.8101 | -44.4038 | -2.0929 | -2.2178 |
| 0.5924 | 0.6613 | 980 | 0.5437 | -0.7682 | -1.5929 | 0.7226 | 0.8247 | -51.7415 | -44.3609 | -2.0940 | -2.2195 |
| 0.5123 | 0.6748 | 1000 | 0.5435 | -0.7541 | -1.5702 | 0.7409 | 0.8160 | -51.6847 | -44.3257 | -2.1014 | -2.2267 |
| 0.5138 | 0.6883 | 1020 | 0.5435 | -0.7311 | -1.5552 | 0.75 | 0.8240 | -51.6473 | -44.2682 | -2.1009 | -2.2258 |
| 0.5285 | 0.7018 | 1040 | 0.5419 | -0.7277 | -1.5460 | 0.7348 | 0.8183 | -51.6243 | -44.2596 | -2.1035 | -2.2287 |
| 0.3824 | 0.7152 | 1060 | 0.5394 | -0.7226 | -1.5544 | 0.7439 | 0.8319 | -51.6454 | -44.2468 | -2.1033 | -2.2284 |
| 0.5557 | 0.7287 | 1080 | 0.5415 | -0.7074 | -1.5317 | 0.7287 | 0.8243 | -51.5886 | -44.2089 | -2.1052 | -2.2299 |
| 0.444 | 0.7422 | 1100 | 0.5413 | -0.6969 | -1.5235 | 0.7530 | 0.8266 | -51.5680 | -44.1827 | -2.1116 | -2.2370 |
| 0.4722 | 0.7557 | 1120 | 0.5416 | -0.6962 | -1.5222 | 0.7470 | 0.8260 | -51.5648 | -44.1810 | -2.1162 | -2.2417 |
| 0.5134 | 0.7692 | 1140 | 0.5372 | -0.6796 | -1.5085 | 0.7470 | 0.8288 | -51.5305 | -44.1395 | -2.1178 | -2.2433 |
| 0.5708 | 0.7827 | 1160 | 0.5387 | -0.6821 | -1.5049 | 0.75 | 0.8228 | -51.5216 | -44.1456 | -2.1084 | -2.2328 |
| 0.4292 | 0.7962 | 1180 | 0.5384 | -0.6850 | -1.5102 | 0.7561 | 0.8252 | -51.5348 | -44.1529 | -2.1138 | -2.2393 |
| 0.5904 | 0.8097 | 1200 | 0.5383 | -0.6984 | -1.5384 | 0.7287 | 0.8400 | -51.6054 | -44.1865 | -2.1052 | -2.2298 |
| 0.4474 | 0.8232 | 1220 | 0.5401 | -0.7007 | -1.5319 | 0.7530 | 0.8312 | -51.5891 | -44.1922 | -2.1089 | -2.2343 |
| 0.3638 | 0.8367 | 1240 | 0.5401 | -0.6971 | -1.5421 | 0.75 | 0.8450 | -51.6145 | -44.1831 | -2.1105 | -2.2361 |
| 0.4443 | 0.8502 | 1260 | 0.5386 | -0.7037 | -1.5495 | 0.7744 | 0.8458 | -51.6330 | -44.1997 | -2.1073 | -2.2323 |
| 0.3582 | 0.8637 | 1280 | 0.5373 | -0.6998 | -1.5408 | 0.7439 | 0.8410 | -51.6114 | -44.1899 | -2.1085 | -2.2339 |
| 0.3966 | 0.8772 | 1300 | 0.5378 | -0.7087 | -1.5603 | 0.75 | 0.8517 | -51.6602 | -44.2121 | -2.1039 | -2.2293 |
| 0.3793 | 0.8907 | 1320 | 0.5379 | -0.7132 | -1.5514 | 0.7378 | 0.8382 | -51.6379 | -44.2235 | -2.1080 | -2.2338 |
| 0.5763 | 0.9042 | 1340 | 0.5381 | -0.7135 | -1.5541 | 0.7530 | 0.8407 | -51.6447 | -44.2241 | -2.1102 | -2.2361 |
| 0.5095 | 0.9177 | 1360 | 0.5363 | -0.7085 | -1.5465 | 0.7470 | 0.8380 | -51.6255 | -44.2116 | -2.1041 | -2.2295 |
| 0.397 | 0.9312 | 1380 | 0.5374 | -0.7114 | -1.5410 | 0.7409 | 0.8296 | -51.6118 | -44.2189 | -2.1062 | -2.2319 |
| 0.5964 | 0.9447 | 1400 | 0.5401 | -0.7098 | -1.5517 | 0.7530 | 0.8419 | -51.6385 | -44.2148 | -2.1067 | -2.2323 |
| 0.5611 | 0.9582 | 1420 | 0.5371 | -0.7089 | -1.5563 | 0.7530 | 0.8474 | -51.6502 | -44.2127 | -2.1052 | -2.2306 |
| 0.6985 | 0.9717 | 1440 | 0.5388 | -0.7120 | -1.5439 | 0.7409 | 0.8319 | -51.6190 | -44.2203 | -2.1083 | -2.2342 |
| 0.3476 | 0.9852 | 1460 | 0.5366 | -0.7067 | -1.5395 | 0.7378 | 0.8328 | -51.6081 | -44.2071 | -2.1082 | -2.2337 |
| 0.5575 | 0.9987 | 1480 | 0.5381 | -0.7094 | -1.5490 | 0.7622 | 0.8396 | -51.6319 | -44.2139 | -2.1097 | -2.2356 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.20.3
|
New-Viral-Tiwa-Savage-Video/Original.Full.Clip.Tiwa.Savage.Viral.Video.Leaks.Official | New-Viral-Tiwa-Savage-Video | 2025-05-30T12:43:23Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T12:43:14Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
dsfdrdtrdr/sffds | dsfdrdtrdr | 2025-05-30T12:37:09Z | 0 | 0 | null | [
"en",
"doi:10.57967/hf/5674",
"license:bsd",
"region:us"
] | null | 2025-05-30T08:44:58Z | ---
license: bsd
language:
- en
--- |
BootesVoid/cmb98b6cl08ox1b1ymwfos57j_cmbar1ylt02ra42yx57684dhu | BootesVoid | 2025-05-30T12:28:31Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T12:28:28Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: PRIVATEMUSE
---
# Cmb98B6Cl08Ox1B1Ymwfos57J_Cmbar1Ylt02Ra42Yx57684Dhu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PRIVATEMUSE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "PRIVATEMUSE",
"lora_weights": "https://huggingface.co/BootesVoid/cmb98b6cl08ox1b1ymwfos57j_cmbar1ylt02ra42yx57684dhu/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb98b6cl08ox1b1ymwfos57j_cmbar1ylt02ra42yx57684dhu', weight_name='lora.safetensors')
image = pipeline('PRIVATEMUSE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb98b6cl08ox1b1ymwfos57j_cmbar1ylt02ra42yx57684dhu/discussions) to add images that show off what you've made with this LoRA.
|
dsfdrdtrdr/iouio | dsfdrdtrdr | 2025-05-30T12:24:41Z | 0 | 0 | null | [
"doi:10.57967/hf/5678",
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-30T12:24:41Z | ---
license: bigscience-openrail-m
---
|
hoan17/saving_P800s200x14d4_20 | hoan17 | 2025-05-30T12:24:15Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-05-30T12:23:21Z | ---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned with text.
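A minimal text-to-image sketch with 🧨 diffusers (the repo id is this checkpoint's; the fp16/CUDA settings and the example prompt are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load this fine-tuned checkpoint; fp16 on GPU is an assumption, not a requirement.
pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_P800s200x14d4_20", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("sample.png")
```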
|
akseljoonas/Agentic-Qwen3-4B-e12-lr4-b2 | akseljoonas | 2025-05-30T12:15:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:smolagents/codeagent-traces",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T10:13:57Z | ---
base_model: Qwen/Qwen3-4B
datasets: smolagents/codeagent-traces
library_name: transformers
model_name: Agentic-Qwen3-4B-e12-lr4-b2
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Agentic-Qwen3-4B-e12-lr4-b2
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) on the [smolagents/codeagent-traces](https://huggingface.co/datasets/smolagents/codeagent-traces) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="akseljoonas/Agentic-Qwen3-4B-e12-lr4-b2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/akseljoonas-university-of-groningen/huggingface/runs/7j0s5pns)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bhavinjawade/may30-gemma-4b-tq_sft_finetuned-model-o1-augmented | bhavinjawade | 2025-05-30T12:04:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T10:01:14Z | ---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: may30-gemma-4b-tq_sft_finetuned-model-o1-augmented
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for may30-gemma-4b-tq_sft_finetuned-model-o1-augmented
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bhavinjawade/may30-gemma-4b-tq_sft_finetuned-model-o1-augmented", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.50.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
biligee23/biligee-finetune-news-summarization | biligee23 | 2025-05-30T12:01:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T12:01:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF | mradermacher | 2025-05-30T11:57:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"gui",
"en",
"zh",
"base_model:OpenGVLab/ZeroGUI-OSWorld-7B",
"base_model:quantized:OpenGVLab/ZeroGUI-OSWorld-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-30T10:43:29Z | ---
base_model: OpenGVLab/ZeroGUI-OSWorld-7B
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
- gui
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenGVLab/ZeroGUI-OSWorld-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ZeroGUI-OSWorld-7B-i1-GGUF/resolve/main/ZeroGUI-OSWorld-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
HeOeH/Iron_0528_stage1_hun | HeOeH | 2025-05-30T11:44:23Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T11:26:46Z | ---
license: apache-2.0
---
|
shayantrix/category_finding | shayantrix | 2025-05-30T11:41:31Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T09:37:39Z | ---
license: apache-2.0
---
|
SakuraLLM/Sakura-GalTransl-7B-v3.5 | SakuraLLM | 2025-05-30T11:20:50Z | 5,097 | 60 | null | [
"gguf",
"zh",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-22T03:04:31Z | ---
license: cc-by-nc-sa-4.0
language:
- zh
---
The Sakura-GalTransl model was built jointly by sakuraumi and xd2333, and is specifically optimized for visual novel (Galgame) translation. The model has 7B parameters and supports Japanese-to-Simplified-Chinese translation (jp2zh-cn).

**Sakura-GalTransl inherits the Sakura model's CC BY-NC-SA 4.0 license. Commercial use is prohibited, for example offering paid translation APIs, producing patches that require payment of any kind to obtain, or commercial translation work.**

### Features:

* Specifically optimized for visual novel (Galgame) translation. Good at preserving in-line line breaks, control characters, ruby annotations, and similar markup in visual novel scripts.
* Aims to balance hardware requirements, translation quality, and stability. The model can run on mainstream gaming GPUs (≥6 GB free VRAM) or on a MacBook, with overall highly usable translation quality and stability.
* Adapted and tuned for the [GalTransl visual novel translation tool](https://github.com/xd2333/GalTransl), with GPT dictionary support ([see here for the dictionary syntax](https://github.com/xd2333/GalTransl/wiki/GPT%E5%AD%97%E5%85%B8%E2%80%90Sakura%E4%B8%8EGaltransl%E6%A8%A1%E5%9E%8B)).
* Online translation with tools such as LunaTranslator is also supported.

### Changelog:

25.05.30 v3.5: improved literary quality

25.03.22 v3.0: based on Sakura-7B-Qwen2.5-v1.0 and reinforced with GRPO; translation quality is significantly better than the previous generation of GalTransl models

24.10.04 v2.6: improved stability on top of 2.5

24.09.30 v2.5: suppressed some known issues; finer writing style compared with v2

24.08.08 v2.0: continued iteration to improve quality

24.06.30 v1.5: improved the overall writing style

24.05.30 v1.0: initial release

### Quick deployment:

* On Windows, deploying with [Sakura_Launcher_GUI](https://github.com/PiDanShouRouZhouXD/Sakura_Launcher_GUI) is recommended; download it from the releases.
* On Mac you can use [run_Sakura_any.zip](https://huggingface.co/SakuraLLM/Sakura-GalTransl-7B-v3/blob/main/run_Sakura_any.zip), a simple deployment package that supports Win/Mac/Linux and NVIDIA/AMD/Apple silicon:
  1. After unzipping, drop the model into the llm2run folder.
  2. Win: double-click run_Sakura_win.bat, then select the model.
     Mac: install Xcode from the App Store first, then open a terminal, change to the directory containing run_Sakura.exe, and run `chmod +x run_Sakura.exe llamafile.exe & ./run_Sakura.exe`
     Linux: using a GPU on Linux requires the CUDA SDK or HIP SDK; then change to the directory containing run_Sakura.exe and run `chmod +x run_Sakura.exe llamafile.exe & ./run_Sakura.exe`
  3. With 6 GB VRAM use 1 thread; with 8 GB or more you can set 4-10 threads.
* If launching fails, port 8080 may be occupied; try [finding the program using the port](https://www.runoob.com/w3cnote/windows-finds-port-usage.html).

### Known issues:

* GPT dictionaries **do not support the one-word-multiple-translations syntax ("a/b")**; this will be addressed in a future version.
* Factual errors/hallucinations may occur when the model has to infer elements that are omitted in the source text.
* Translating **7-10 sentences** per request is recommended.

### Quantization levels:

| Quant level | Notes |
| ---- | ---- |
| IQ4_XS | small quality loss; smaller footprint, but slower than Q4_K (recommended for 6 GB VRAM) |
| Q4_K | small quality loss (recommended for 6 GB VRAM) |
| Q5_K | very small quality loss (recommended for 6 GB/8 GB VRAM) |
| Q6_k | minimal quality loss (recommended for 8 GB+ VRAM) |

### Request format

Recommended temperature for v3: 0.6

v3 request template:

system prompt (instructs the model to translate Japanese into fluent Simplified Chinese in the given style using the supplied glossary, keep personal pronouns consistent with context, avoid confusing the subject/object of causative and passive constructions, and avoid adding special symbols or changing line breaks):
```
你是一个视觉小说翻译模型,可以通顺地使用给定的术语表以指定的风格将日文翻译成简体中文,并联系上下文正确使用人称代词,注意不要混淆使役态和被动态的主语和宾语,不要擅自添加原文中没有的特殊符号,也不要擅自增加或减少换行。
```
user prompt (the first Chinese line says "refer to the glossary below (may be empty; format: src->dst #note):"; the second says "using the glossary mappings and notes above, together with the story history and context, translate the text below from Japanese into Simplified Chinese:"):
```
[History]
参考以下术语表(可为空,格式为src->dst #备注):
[Glossary]
根据以上术语表的对应关系和备注,结合历史剧情和上下文,将下面的文本从日文翻译成简体中文:
[Input]
```
Here [History] takes the form of the literal prefix `历史翻译:` ("translation history:") followed by the previous round's translation result, and [Glossary] uses the src->dst #note format.
Both items are optional and may be left empty.
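As an illustrative sketch (not part of the original card), the deployment above exposes a llama.cpp-style server on port 8080, which can typically be queried through an OpenAI-compatible client; the endpoint path, model name, and example text here are assumptions:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-needed")

resp = client.chat.completions.create(
    model="sakura-galtransl",  # most local servers ignore this; placeholder
    temperature=0.6,           # recommended for v3
    messages=[
        # The system prompt template above (truncated here for brevity).
        {"role": "system", "content": "你是一个视觉小说翻译模型,..."},
        # The user prompt template above, with [Input] replaced by the text.
        {"role": "user", "content": "将下面的文本从日文翻译成简体中文:\n「こんにちは、先輩!」"},
    ],
)
print(resp.choices[0].message.content)
```

|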
mfahad/TaxiRLgame | mfahad | 2025-05-30T11:18:22Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-30T10:33:59Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: TaxiRLgame
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="mfahad/TaxiRLgame", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
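The course-style snippet above assumes a `load_from_hub` helper. A minimal version, plus a greedy rollout, might look like this (the `qtable` key follows the usual Q-learning course convention and is an assumption):

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download the pickled Q-table dict from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="mfahad/TaxiRLgame", filename="q-learning.pkl")
env = gym.make(model["env_id"])

state, _ = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```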
|
GabrielMM/Instruct_SFT_v2_70ksteps | GabrielMM | 2025-05-30T11:17:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T11:17:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mfahad/qFrozenSlippery | mfahad | 2025-05-30T11:11:46Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-30T10:28:11Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: qFrozenSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mfahad/qFrozenSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mohammadmahdinouri/modernAlbert-2-init | mohammadmahdinouri | 2025-05-30T11:10:28Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"ModernALBERT",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-30T11:10:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmbao3v2y010w42yx2k90x5ij_cmbao6kp6013642yxynx1xgtu | BootesVoid | 2025-05-30T11:09:36Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T11:09:35Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HANIA
---
# Cmbao3V2Y010W42Yx2K90X5Ij_Cmbao6Kp6013642Yxynx1Xgtu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HANIA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HANIA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbao3v2y010w42yx2k90x5ij_cmbao6kp6013642yxynx1xgtu/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbao3v2y010w42yx2k90x5ij_cmbao6kp6013642yxynx1xgtu', weight_name='lora.safetensors')
image = pipeline('HANIA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbao3v2y010w42yx2k90x5ij_cmbao6kp6013642yxynx1xgtu/discussions) to add images that show off what youโve made with this LoRA.
|
TOMFORD79/Tom10 | TOMFORD79 | 2025-05-30T11:05:59Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-30T07:41:00Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
kaengreg/Qwen2.5-2B-layerwise-distilled | kaengreg | 2025-05-30T10:55:38Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2025-05-30T10:49:10Z | Model distilled from [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) using an [Iterative Layer-wise Distillation](https://github.com/kaengreg/layer-wise_distillation) approach.
Technical Report [Coming Soon]
|
Martiiiin/MN-12B-Mag-Mell-R1-mlx-8Bit | Martiiiin | 2025-05-30T10:55:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"mlx",
"mlx-my-repo",
"conversational",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:quantized:inflatebot/MN-12B-Mag-Mell-R1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-05-30T10:54:42Z | ---
base_model: inflatebot/MN-12B-Mag-Mell-R1
library_name: transformers
tags:
- mergekit
- merge
- mlx
- mlx-my-repo
---
# Martiiiin/MN-12B-Mag-Mell-R1-mlx-8Bit
The Model [Martiiiin/MN-12B-Mag-Mell-R1-mlx-8Bit](https://huggingface.co/Martiiiin/MN-12B-Mag-Mell-R1-mlx-8Bit) was converted to MLX format from [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Martiiiin/MN-12B-Mag-Mell-R1-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
TheStageAI/Elastic-FLUX.1-schnell | TheStageAI | 2025-05-30T10:54:12Z | 28 | 3 | null | [
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:finetune:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | null | 2025-04-08T18:07:13Z | ---
license: apache-2.0
base_model:
- black-forest-labs/FLUX.1-schnell
---
# Elastic model: Fastest self-serving models. FLUX.1-schnell.
Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of Elastic Models:__
* Provide the fastest models and service for self-hosting.
* Provide flexibility in cost vs quality selection for inference.
* Provide clear quality and latency benchmarks.
* Provide interface of HF libraries: transformers and diffusers with a single line of code.
* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
> It's important to note that specific quality degradation can vary from model to model. For instance, with an S model, you can have 0.5% degradation as well.
-----


## Inference
Currently, our demo model only supports 1024x1024 outputs without batching. This will be updated in the near future.
To infer our models, you just need to replace `diffusers` import with `elastic_models.diffusers`:
```python
import torch
from elastic_models.diffusers import FluxPipeline
model_name = 'black-forest-labs/FLUX.1-schnell'
hf_token = ''
device = torch.device("cuda")
pipeline = FluxPipeline.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
token=hf_token,
mode='S'
)
pipeline.to(device)
prompts = ["Kitten eating a banana"]
output = pipeline(prompt=prompts)
for prompt, output_image in zip(prompts, output.images):
output_image.save((prompt.replace(' ', '_') + '.png'))
```
### Installation
__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models just run these lines in your terminal:
```shell
pip install thestage
pip install elastic_models[nvidia]\
--index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\
--extra-index-url https://pypi.nvidia.com\
--extra-index-url https://pypi.org/simple
# or for blackwell support
pip install elastic_models[blackwell]\
--index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\
--extra-index-url https://pypi.nvidia.com\
--extra-index-url https://pypi.org/simple
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), login and generate API token from your profile page. Set up API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms.
### Quality benchmarks
For quality evaluation we have used: PSNR, SSIM and CLIP score. PSNR and SSIM were computed using outputs of original model.
| Metric/Model | S | M | L | XL | Original |
|---------------|---|---|---|----|----------|
| PSNR | 29.9 | 30.2 | 31 | inf | inf |
| SSIM | 0.66 | 0.71 | 0.86 | 1.0 | 1.0 |
| CLIP | 11.5 | 11.6 | 11.8 | 11.9 | 11.9|
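For reference, a minimal sketch of how PSNR against the original model's output can be computed (assuming float images scaled to [0, 1]); identical outputs give infinite PSNR, which is why the XL and Original columns read `inf`:
```python
import torch

def psnr(reference: torch.Tensor, candidate: torch.Tensor) -> float:
    """PSNR in dB between two images in [0, 1]."""
    mse = torch.mean((reference - candidate) ** 2)
    if mse == 0:
        return float("inf")  # mathematically equivalent model, e.g. the XL variant
    return (10 * torch.log10(1.0 / mse)).item()
```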
### Latency benchmarks
Time in seconds to generate one image 1024x1024
| GPU/Model | S | M | L | XL | Original |
|-----------|-----|---|---|----|----------|
| H100 | 0.5 | 0.57 | 0.65 | 0.7 | 1.04 |
| L40s | 1.4 | 1.6 | 1.9 | 2.1 | 2.5|
| B200 | 0.3 | 0.4 | 0.42 | 0.43 | 0.74|
| GeForce RTX 5090 | 0.94 | - | - | - | -|
## Links
* __Platform__: [app.thestage.ai](https://app.thestage.ai)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: [email protected]
|
PepitaxX/qwen3-0.6B-openQA_finetune_mmlu_fullprompt | PepitaxX | 2025-05-30T10:48:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T10:47:58Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Flex-VL-7B-GGUF | mradermacher | 2025-05-30T10:36:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jongwooko/Flex-VL-7B",
"base_model:quantized:jongwooko/Flex-VL-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-30T10:10:40Z | ---
base_model: jongwooko/Flex-VL-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jongwooko/Flex-VL-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
muqtasid87/qwen2.5vl-finetune-platesmania-dataset-v2_qv | muqtasid87 | 2025-05-30T10:31:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T10:31:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kalle07/embedder_collection | kalle07 | 2025-05-30T10:28:08Z | 20,606 | 11 | sentence-transformers | [
"sentence-transformers",
"gguf",
"sentence-similarity",
"feature-extraction",
"embedder",
"embedding",
"models",
"GGUF",
"Bert",
"Nomic",
"Gist",
"BGE",
"Jina",
"text-embeddings-inference",
"RAG",
"Rerank",
"similarity",
"PDF",
"Parsing",
"Parser",
"en",
"de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-03-03T16:46:55Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- embedder
- embedding
- models
- GGUF
- Bert
- Nomic
- Gist
- BGE
- Jina
- text-embeddings-inference
- RAG
- Rerank
- similarity
- PDF
- Parsing
- Parser
misc:
- text-embeddings-inference
language:
- en
- de
architecture:
---
# <b>All models tested with ALLM (AnythingLLM) with LM-Studio as server; all models should work with ollama</b>
<b> the setup for local documents described below is almost the same; GPT4All has only one model (nomic), and koboldcpp is not built in right now but in development</b><br>
(sometimes the results are more truthful if the "chat with document only" option is used)<br>
BTW, the embedder is only one part of a good RAG<br>
<b>⇨</b> give me a โค๏ธ, if you like ;)<br>
<br>
<b>My short impression:</b>
<ul style="line-height: 1.05;">
<li>nomic-embed-text (up to 2048t context length)</li>
<li>mxbai-embed-large</li>
<li>mug-b-1.6</li>
<li>snowflake-arctic-embed-l-v2.0 (up to 8192t context length)</li>
<li>Ger-RAG-BGE-M3 (german, up to 8192t context length)</li>
<li>german-roberta</li>
<li>bge-m3 (up to 8192t context length)</li>
</ul>
Working well; all others are up to you! Some models are very similar! (jina- and qwen-based are not yet supported by LM)<br>
With the same settings, these embedders found the same 6-7 snippets out of 10 from a book. That means only 3-4 snippets were different, but I didn't test it extensively.
<br>
<br>
...
# Short hints for using (Example for a large context with many expected hits):
Set your context length (Max Tokens) to 16000t for the main LLM model, set your embedder model (Max Embedding Chunk Length) to 1024t, set (Max Context Snippets) to 14;
in ALLM also set (Text splitting & Chunking Preferences - Text Chunk Size) to 1024-character parts and (Search Preference) to "accuracy".
<br>
-> Ok, what does that mean?<br>
Your document will be embedded in x chunks (snippets) of 1024t each,<br>
You can receive 14 snippets of 1024t (~14000t) from your document, ~10000 words (10 pages), and ~2000t are left (of 16000t) for the answer, ~1000 words (2 pages)
<br>
You can play with the settings for your needs, e.g. 8 snippets of 2048t, or 28 snippets of 512t ... (every time you change the chunk length, the document must be embedded again). With these settings everything fits best for ONE answer; if you need more for a conversation, you should set lower values and/or disable the document.
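As a sanity check, the token budget from the example above can be worked out in a few lines (plain Python; the numbers are the ones from this section):
```python
# token budget for the example above
context_len  = 16000  # main model context length (Max Tokens)
chunk_len    = 1024   # embedder chunk size (Max Embedding Chunk Length)
max_snippets = 14     # Max Context Snippets

retrieval_budget = max_snippets * chunk_len        # 14336t of document snippets
answer_budget    = context_len - retrieval_budget  # ~1664t left for the answer

print(f"snippets: {retrieval_budget}t, answer: {answer_budget}t")
```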
<ul style="line-height: 1.05;">
English vs. German differs by ~50%<br>
~5000 characters is one page of a book (no matter ger/en), but words in German are longer, which means more tokens per word<br>
the example is English; for German you can add approx. 50% more tokens (1000 words ~1800t)<br>
<li>1200t (~1000 words ~5000 characters) ~0.1GB, this is approx one page with small font</li>
<li>8000t (~6000 words) ~0.8GB VRAM usage</li>
<li>16000t (~12000 words) ~1.5GB VRAM usage</li>
<li>32000t (~24000 words) ~3GB VRAM usage</li>
</ul>
<br>
here is a tokenizer calculator<br>
<a href="https://quizgecko.com/tools/token-counter">https://quizgecko.com/tools/token-counter</a><br>
and a Vram calculator - (you need the original model link NOT the GGUF)<br>
<a href="https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator">https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator</a><br>
...
<br>
# How embedding and search works:
You have a txt/pdf file, maybe 90000 words (~300 pages), a book. You ask the model, let's say, "what is described in the chapter called XYZ in relation to person ZYX".
Now it searches for keywords or similar semantic terms in the document. If it has found them, let's say words and meanings around "XYZ and ZYX",
a piece of text of 1024 tokens around this word "XYZ/ZYX" is cut out at that point. (In reality, it's all done with coded numbers, but that doesn't matter; the principle is the same)<br>
This text snippet is then used for your answer. <br>
<ul style="line-height: 1.05;">
<li>If, for example, the word โXYZโ occurs 100 times in one file, not all 100 are found.</li>
<li>If only one snippet corresponds to your question, all other snippets can negatively influence your answer because they do not fit the topic (usually 4 to 32 snippets are fine)</li>
<li>If you expect multiple search results in your docs, try 16 snippets or more; if you expect only 2, then don't use more!</li>
<li>If you use a chunk length of ~1024t you receive more content; if you use ~256t you receive more facts, BUT a lower chunk length means more chunks and takes much longer.</li>
<li>A question like "summarize the document" is usually not useful; if the document has an introduction or summaries, the search lands there if you are lucky.</li>
<li>If a book has a table of contents or a bibliography, I would delete these pages, as they often contain relevant search terms but do not help answer your question.</li>
<li>If the document is small, like 10-20 pages, it is better to copy the whole text into the prompt; some options are called "pin".</li>
</ul>
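To illustrate the principle (not what ALLM does internally), a minimal chunk-and-retrieve loop with sentence-transformers might look like this; the model name is just a stand-in, and chunking is done by characters here for brevity:
```python
from sentence_transformers import SentenceTransformer, util

text = open("book.txt", encoding="utf-8").read()
chunk_size = 1024  # characters here; real setups count tokens
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedder
chunk_emb = model.encode(chunks, convert_to_tensor=True, normalize_embeddings=True)

query = "what is described in the chapter called XYZ in relation to person ZYX"
query_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)

# top 14 snippets by cosine similarity, like (Max Context Snippets) = 14
hits = util.semantic_search(query_emb, chunk_emb, top_k=14)[0]
for hit in hits:
    print(round(hit["score"], 3), chunks[hit["corpus_id"]][:80])
```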
<br>
...
<br>
# Nevertheless, the <b>main model is also important</b>!
Especially to deal with the context length, and I don't mean just the theoretical number you can set.
Some models can handle 128k or 1M tokens, but even with 16k or 32k input, the response with the same snippets as input is worse than with other well-developed models.<br>
<br>
llama3.1, llama3.2, qwen2.5, deepseek-r1-distill, gemma-3, granite, SauerkrautLM-Nemo(german) ... <br>
(llama3 or phi3.5 are not working well) <br><br>
<b>⇨</b> best models for english and german:<br>
granite3.2-8b (2b version also) - https://huggingface.co/ibm-research/granite-3.2-8b-instruct-GGUF<br>
Chocolatine-2-14B (other versions also) - https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b11-GGUF<br>
QwQ-LCoT- (7/14b) - https://huggingface.co/mradermacher/QwQ-LCoT-14B-Conversational-GGUF<br><br>
...
# Important -> The Systemprompt (some examples):
<li> The system prompt is weighted with a certain amount of influence around your question. You can easily test this once without a system prompt or with a nonsensical one.</li>
"You are a helpful assistant who provides an overview of ... under the aspects of ... .
You use attached excerpts from the collection to generate your answers!
Weight each individual excerpt in order, with the most important excerpts at the top and the less important ones further down.
The context of the entire article should not be given too much weight.
Answer the user's question!
After your answer, briefly explain why you included excerpts (1 to X) in your response and justify briefly if you considered some of them unimportant!"<br>
<i>(change it for your needs, this example works well when I consult a book about a person and a term related to them, the explanation part was just a test for myself)</i><br>
or:<br>
"You are an imaginative storyteller who crafts compelling narratives with depth, creativity, and coherence.
Your goal is to develop rich, engaging stories that captivate readers, staying true to the themes, tone, and style appropriate for the given prompt.
You use attached excerpts from the collection to generate your answers!
When generating stories, ensure the coherence in characters, setting, and plot progression. Be creative and introduce imaginative twists and unique perspectives."<br>
or:<br>
"You are are a warm and engaging companion who loves to talk about cooking, recipes and the joy of food.
Your aim is to share delicious recipes, cooking tips and the stories behind different cultures in a personal, welcoming and knowledgeable way."<br>
<br>
btw. <b>Jinja</b> templates are very new ... the usual templates with the usual models are fine, but merged models have a lot of optimization potential (but don't ask me, I'm not a coder)<br>
<br><br>
...
<br>
# DOC/PDF 2 TXT<br>
Prepare your documents by yourself!<br>
Bad input = bad output!<br>
In most cases, it is not immediately obvious how the document is made available to the embedder.
In nearly all cases, images and tables, page numbers, chapters and section/paragraph formatting are not well handled.
An easy start is to use a Python-based PDF parser (there are plenty).<br>
options for simple txt/table converting:
<ul style="line-height: 1.05;">
<li>pdfplumber</li>
<li>fitz/PyMuPDF</li>
<li>Camelot</li>
</ul>
All in all, you can tune your code a lot, and you can manually add OCR.<br>
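For example, a minimal extraction sketch with pdfplumber (file names are placeholders):
```python
import pdfplumber

parts = []
with pdfplumber.open("input.pdf") as pdf:
    for page in pdf.pages:
        parts.append(page.extract_text() or "")   # None on image-only pages
        for table in page.extract_tables():       # each table: list of rows of cells
            parts.append("\n".join("\t".join(c or "" for c in row) for row in table))

with open("input.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(parts))
```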
my option:<br>
<a href="https://huggingface.co/kalle07/pdf2txt_parser_converter">https://huggingface.co/kalle07/pdf2txt_parser_converter</a>
<br><br>
option all in all solution for the future:
<ul style="line-height: 1.05;">
<li>docling - (opensource on github)</li>
</ul>
It gives some ready-to-use examples, which are already pretty good, ~10-20 code lines.
<br>
<a href="https://github.com/docling-project/docling/tree/main/docs/examples">https://github.com/docling-project/docling/tree/main/docs/examples</a><br>
It also downloads some models automatically for OCR. The only thing I haven't found yet (maybe it doesn't exist) is reading out the font type, which works very well with <b>fitz</b>, for example.
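A minimal docling conversion, following their basic example (API as of recent versions; check the linked examples if it has changed):
```python
from docling.document_converter import DocumentConverter

converter = DocumentConverter()          # downloads layout/OCR models on first run
result = converter.convert("input.pdf")
markdown = result.document.export_to_markdown()

with open("input.md", "w", encoding="utf-8") as f:
    f.write(markdown)
```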
<br><br>
a large option to play with many types (UI-based)
<ul style="line-height: 1.05;">
<li>Parsemy PDF</li>
</ul>
<a href="https://github.com/genieincodebottle/parsemypdf">https://github.com/genieincodebottle/parsemypdf</a><br>
<br>
...
<br>
# only Indexing option<br>
One hint for fast search across 10000s of PDFs (it's only indexing, not embedding): you can use it as a simple way to find your top 5-10 articles or books, which you can then make available to an LLM.<br>
Jabref - https://github.com/JabRef/jabref/tree/v6.0-alpha?tab=readme-ov-file <br>
https://builds.jabref.org/main/ <br>
or<br>
docfetcher - https://docfetcher.sourceforge.io/en/index.html (yes old but very useful)
<br><br>
...
<br>
" on discord <b>sevenof9</b> "
<br><br>
...
<br>
# (ALL licenses and terms of use go to original author)
...
<ul style="line-height: 1.05;">
<li>avemio/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI (German, English)</li>
<li>maidalun1020/bce-embedding-base_v1 (English and Chinese)</li>
<li>maidalun1020/bce-reranker-base_v1 (English, Chinese, Japanese and Korean)</li>
<li>BAAI/bge-reranker-v2-m3 (English and Chinese)</li>
<li>BAAI/bge-reranker-v2-gemma (English and Chinese)</li>
<li>BAAI/bge-m3 (English and Chinese)</li>
<li>avsolatorio/GIST-large-Embedding-v0 (English)</li>
<li>ibm-granite/granite-embedding-278m-multilingual (English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese)</li>
<li>ibm-granite/granite-embedding-125m-english</li>
<li>Labib11/MUG-B-1.6 (?)</li>
<li>mixedbread-ai/mxbai-embed-large-v1 (multi)</li>
<li>nomic-ai/nomic-embed-text-v1.5 (English, multi)</li>
<li>Snowflake/snowflake-arctic-embed-l-v2.0 (English, multi)</li>
<li>intfloat/multilingual-e5-large-instruct (100 languages)</li>
<li>T-Systems-onsite/german-roberta-sentence-transformer-v2</li>
<li>mixedbread-ai/mxbai-embed-2d-large-v1</li>
<li>jinaai/jina-embeddings-v2-base-en</li>
</ul>
|
BootesVoid/cmba26xla0l691b1ysnvx2dhc_cmbalhtva0537hy17urprvzcg | BootesVoid | 2025-05-30T10:27:34Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T10:27:23Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: YUNAPARK
---
# Cmba26Xla0L691B1Ysnvx2Dhc_Cmbalhtva0537Hy17Urprvzcg
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `YUNAPARK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "YUNAPARK",
"lora_weights": "https://huggingface.co/BootesVoid/cmba26xla0l691b1ysnvx2dhc_cmbalhtva0537hy17urprvzcg/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmba26xla0l691b1ysnvx2dhc_cmbalhtva0537hy17urprvzcg', weight_name='lora.safetensors')
image = pipeline('YUNAPARK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmba26xla0l691b1ysnvx2dhc_cmbalhtva0537hy17urprvzcg/discussions) to add images that show off what youโve made with this LoRA.
|
nmndeep/CLIC-ViT-B-32-224-PixPr-RedCaps | nmndeep | 2025-05-30T10:16:59Z | 0 | 0 | open_clip | [
"open_clip",
"safetensors",
"region:us"
] | null | 2025-03-27T13:16:36Z |
# Model Card for CLIC-ViT-B-32-224-PixPr-RedCaps
## Model Details
<!-- Provide the basic links for the model. -->
- **Model details:** Fine-tuned with CLIC using the PixelProse dataset
## Model Usage
### With OpenCLIP
```
import torch
from PIL import Image
import open_clip
from urllib.request import urlopen
model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:nmndeep/CLIC-ViT-B-32-224-PixPr-RedCaps')
image = image_processor(Image.open(urlopen(
'https://images.pexels.com/photos/869258/pexels-photo-869258.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1'))).unsqueeze(0)
model.eval()
tokenizer = open_clip.get_tokenizer('hf-hub:nmndeep/CLIC-ViT-B-32-224-PixPr-RedCaps')
texts= ["a diagram", "a dog", "a cat", "snow"]
text = tokenizer(texts)
with torch.no_grad(), torch.autocast("cuda"):
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
idx = torch.argmax(text_probs)
print("Output label:", texts[idx])
``` |
Rhodham96/EuroSatCNN | Rhodham96 | 2025-05-30T10:15:43Z | 0 | 0 | null | [
"pytorch",
"en",
"dataset:blanchon/EuroSAT_MSI",
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T09:48:44Z | ---
license: apache-2.0
datasets:
- blanchon/EuroSAT_MSI
language:
- en
metrics:
- f1
- accuracy
---
# Model Card: EuroSAT CNN for Land Cover Classification
## Model Description
This model is a Convolutional Neural Network (CNN) designed for land cover classification on the EuroSAT dataset. The EuroSAT dataset consists of Sentinel-2 satellite images, each with 13 spectral bands, and is commonly used for remote sensing applications.
The CNN architecture is as follows:
* **Input:** 13 spectral bands, 64x64 pixel images.
* **Feature Extractor (`nn.Sequential`):**
* `Conv2d`: 13 input channels, 128 output channels, kernel size 4, padding 1.
* `ReLU` activation.
* `MaxPool2d`: kernel size 2.
* `Conv2d`: 128 input channels, 64 output channels, kernel size 4, padding 1.
* `ReLU` activation.
* `MaxPool2d`: kernel size 2.
* `Conv2d`: 64 input channels, 32 output channels, kernel size 4, padding 1.
* `ReLU` activation.
* `MaxPool2d`: kernel size 2.
* `Conv2d`: 32 input channels, 16 output channels, kernel size 4, padding 1.
* `ReLU` activation.
* `MaxPool2d`: kernel size 2.
* **Classifier (`nn.Sequential`):**
* `Flatten` layer.
* `Linear` layer: dynamically calculated input features to 64 output features.
* `ReLU` activation.
* `Linear` layer: 64 input features to `num_classes` (output classes).
The model is implemented using PyTorch.
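Putting that description together, a sketch of the module might look like the following (the exact `model_def.py` is not shown here, so treat details as assumptions; the flatten size is derived with a dummy forward pass, matching the "dynamically calculated" note above):
```python
import torch
import torch.nn as nn

class EuroSATCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(13, 128, kernel_size=4, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 64, kernel_size=4, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 32, kernel_size=4, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 16, kernel_size=4, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        with torch.no_grad():  # derive flatten size from a dummy 13x64x64 input
            n_feats = self.features(torch.zeros(1, 13, 64, 64)).numel()  # 16*3*3 = 144
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_feats, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```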
## Dataset
The model was trained and evaluated using the **EuroSAT_MSI** dataset available on Hugging Face: <https://huggingface.co/datasets/blanchon/EuroSAT_MSI>.
This dataset is a collection of Sentinel-2 satellite images, each with 13 spectral bands, categorized into 10 land cover classes. It is widely used for remote sensing and land use/land cover classification tasks.
## Training Data
The model was trained on the EuroSAT dataset, which contains satellite images from the Sentinel-2 mission, categorized into various land cover classes.
## Training Notebook
You can explore the full training process and code in the Google Colab notebook hosted on GitHub:
[View Training Notebook on GitHub](https://github.com/Rhodham96/EuroSatCNN/blob/main/EuroSATCNN.ipynb)
## Evaluation Results
The model's performance was evaluated on a dedicated test set.
* **Test Accuracy:** 87.96%
* **F1 Score (weighted):** 0.8776
## Usage
This model can be used for automated land cover classification of Sentinel-2 satellite imagery, specifically for images similar to those found in the EuroSAT dataset.
### Example (PyTorch)
```python
import torch
import torch.nn as nn
from model_def import EuroSATCNN
# Example usage:
# Assuming num_classes is known, e.g., 10 for EuroSAT
# model = EuroSATCNN(num_classes=10)
# model.load_state_dict(torch.load("pytorch_model.bin"))
# dummy_input_image = torch.randn(1, 13, 64, 64) # Batch size 1, 13 channels, 64x64
# output = model(dummy_input_image)
# print(output.shape) # Should be torch.Size([1, 10]) if num_classes=10
```
---
## About the Author
This model was developed by **Robin Hamers**.
* **LinkedIn:** <https://www.linkedin.com/in/robin-hamers/>
|
tungduong261204/DPO_1000_v2 | tungduong261204 | 2025-05-30T10:15:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"region:us"
] | null | 2025-05-30T09:26:27Z | ---
base_model: unsloth/Llama-3.2-1B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
rziga/mm_grounding_dino_large_all | rziga | 2025-05-30T10:06:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mm-grounding-dino",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T10:04:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Motif-Technologies/activation | Motif-Technologies | 2025-05-30T10:03:09Z | 0 | 2 | null | [
"kernel",
"region:us"
] | null | 2025-05-30T08:34:06Z | ---
tags:
- kernel
---
# Activation
Activation is a python package that contains custom CUDA-based activation kernels, primarily targeting AMD GPUs.
- Currently implemented
- [PolyNorm](https://arxiv.org/html/2411.03884v1)
## Usage
```python
import torch
from kernels import get_kernel
activation = get_kernel("motif-technologies/activation")
torch.set_default_device("cuda")
poly_norm = activation.layers.PolyNorm(eps=1e-6)
x = torch.randn(10, 10)
print(poly_norm(x))
```
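For orientation, here is a pure-PyTorch sketch of PolyNorm as described in the linked paper: each element-wise power of the input is RMS-normalized and combined with learnable coefficients. This is an assumption written from the paper, not this package's fused kernel, so treat the exact parametrization as illustrative:
```python
import torch
import torch.nn as nn

class PolyNormRef(nn.Module):
    """Reference (non-fused) PolyNorm sketch, assumed from arXiv:2411.03884."""
    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(3) / 3)  # one coefficient per power
        self.bias = nn.Parameter(torch.zeros(1))
        self.eps = eps

    def _rms_norm(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (self.weight[0] * self._rms_norm(x)
                + self.weight[1] * self._rms_norm(x ** 2)
                + self.weight[2] * self._rms_norm(x ** 3)
                + self.bias)
```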
|
tungduong261204/DPO_2000_v2 | tungduong261204 | 2025-05-30T10:02:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"region:us"
] | null | 2025-05-30T09:42:57Z | ---
base_model: unsloth/Llama-3.2-1B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
mradermacher/Refact-1_6B-fim-i1-GGUF | mradermacher | 2025-05-30T09:59:37Z | 129 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"dataset:bigcode/the-stack-dedup",
"dataset:rombodawg/2XUNCENSORED_MegaCodeTraining188k",
"dataset:bigcode/commitpackft",
"base_model:refactai/Refact-1_6B-fim",
"base_model:quantized:refactai/Refact-1_6B-fim",
"license:bigscience-openrail-m",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-03-11T02:29:53Z | ---
base_model: refactai/Refact-1_6B-fim
datasets:
- bigcode/the-stack-dedup
- rombodawg/2XUNCENSORED_MegaCodeTraining188k
- bigcode/commitpackft
language:
- en
library_name: transformers
license: bigscience-openrail-m
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/refactai/Refact-1_6B-fim
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
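For a quick local smoke test, the quants can also be loaded with `llama-cpp-python` (a minimal sketch; the quant filename and prompt are illustrative):

```python
from llama_cpp import Llama

# Point model_path at whichever quant file you downloaded from this repo
llm = Llama(model_path="Refact-1_6B-fim.i1-Q4_K_M.gguf")
out = llm("def fibonacci(n):", max_tokens=128)
print(out["choices"][0]["text"])
```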
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ2_S.gguf) | i1-IQ2_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q2_K.gguf) | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ3_S.gguf) | i1-IQ3_S | 0.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF/resolve/main/Refact-1_6B-fim.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mvyboh/a2c-PandaReachDense-v3 | mvyboh | 2025-05-30T09:58:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-30T09:54:05Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the standard `huggingface_sb3` naming convention):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Fetch the checkpoint from the Hub, then load it into an A2C agent
checkpoint = load_from_hub("mvyboh/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
yfqiu-nlp/chameleon-world-model-aurora-bootstrap | yfqiu-nlp | 2025-05-30T09:58:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:leloy/Anole-7b-v0.1-hf",
"base_model:adapter:leloy/Anole-7b-v0.1-hf",
"region:us"
] | null | 2025-05-30T09:55:36Z | ---
base_model: leloy/Anole-7b-v0.1-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.0 |
Luandrie/_Whisper_Compliance_en_cleaned_text_15steps | Luandrie | 2025-05-30T09:54:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:lelapa/www_compliance_tforge",
"base_model:lelapa/distill_whisper_call_center_en_merged",
"base_model:finetune:lelapa/distill_whisper_call_center_en_merged",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-30T09:50:43Z | ---
library_name: transformers
language:
- en
license: mit
base_model: lelapa/distill_whisper_call_center_en_merged
tags:
- generated_from_trainer
datasets:
- lelapa/www_compliance_tforge
metrics:
- wer
model-index:
- name: Distill Whisper Call Center Compliance
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: www_compliance_tforge
type: lelapa/www_compliance_tforge
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 6.097560975609756
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distill Whisper Call Center Compliance
This model is a fine-tuned version of [lelapa/distill_whisper_call_center_en_merged](https://huggingface.co/lelapa/distill_whisper_call_center_en_merged) on the www_compliance_tforge dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4909
- Wer: 6.0976
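For a quick smoke test, the checkpoint can be loaded with the standard ASR pipeline (a sketch; the audio file path is illustrative):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Luandrie/_Whisper_Compliance_en_cleaned_text_15steps",
)
print(asr("call_center_snippet.wav")["text"])  # hypothetical input file
```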
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.3739 | 1.7391 | 5 | 0.4952 | 4.8780 |
| 0.0517 | 3.4783 | 10 | 0.4839 | 6.0976 |
| 0.0101 | 5.2174 | 15 | 0.4909 | 6.0976 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.20.3
|
sagniksengupta/videomae-base-finetuned-ucf101-subset | sagniksengupta | 2025-05-30T09:52:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-03-24T18:41:24Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0986
- Accuracy: 0.9784
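A minimal inference sketch using the video-classification pipeline (the clip path is illustrative):

```python
from transformers import pipeline

clf = pipeline(
    "video-classification",
    model="sagniksengupta/videomae-base-finetuned-ucf101-subset",
)
print(clf("example_clip.mp4"))  # hypothetical input video
```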
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 2280
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.8511 | 0.1254 | 286 | 1.3736 | 0.5602 |
| 0.7099 | 1.1254 | 572 | 0.5359 | 0.8397 |
| 0.3433 | 2.1254 | 858 | 0.4332 | 0.8772 |
| 0.2015 | 3.1254 | 1144 | 0.2627 | 0.9203 |
| 0.1166 | 4.1254 | 1430 | 0.1257 | 0.9620 |
| 0.0394 | 5.1254 | 1716 | 0.0980 | 0.9714 |
| 0.0092 | 6.1254 | 2002 | 0.0888 | 0.9766 |
| 0.0246 | 7.1219 | 2280 | 0.0739 | 0.9822 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
pgilliar/MNLP_M2_rag_model | pgilliar | 2025-05-30T09:51:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T08:22:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Darsala/georgian_comet | Darsala | 2025-05-30T09:50:30Z | 0 | 0 | comet | [
"comet",
"translation",
"evaluation",
"mt-evaluation",
"georgian",
"ka",
"en",
"dataset:Darsala/georgian_metric_evaluation",
"base_model:Unbabel/wmt22-comet-da",
"base_model:finetune:Unbabel/wmt22-comet-da",
"license:apache-2.0",
"model-index",
"region:us"
] | translation | 2025-05-29T13:46:03Z | ---
language:
- ka
- en
license: apache-2.0
tags:
- translation
- evaluation
- comet
- mt-evaluation
- georgian
metrics:
- kendall_tau
- spearman_correlation
- pearson_correlation
model-index:
- name: Georgian-COMET
results:
- task:
type: translation-evaluation
name: Machine Translation Evaluation
dataset:
name: Georgian MT Evaluation Dataset
type: Darsala/georgian_metric_evaluation
metrics:
- type: pearson_correlation
value: 0.878
name: Pearson Correlation
- type: spearman_correlation
value: 0.796
name: Spearman Correlation
- type: kendall_tau
value: 0.603
name: Kendall's Tau
base_model: Unbabel/wmt22-comet-da
datasets:
- Darsala/georgian_metric_evaluation
---
# Georgian-COMET: Fine-tuned COMET for English-Georgian MT Evaluation
This is a [COMET](https://github.com/Unbabel/COMET) evaluation model fine-tuned specifically for English-Georgian machine translation evaluation. It receives a triplet with (source sentence, translation, reference translation) and returns a score that reflects the quality of the translation compared to both source and reference.
## Model Description
Georgian-COMET is a fine-tuned version of [Unbabel/wmt22-comet-da](https://huggingface.co/Unbabel/wmt22-comet-da) that has been optimized for evaluating English-to-Georgian translations through knowledge distillation from Claude Sonnet 4. The model shows significant improvements over the base model when evaluating Georgian translations.
### Key Improvements over Base Model
| Metric | Base COMET | Georgian-COMET | Improvement |
|--------|------------|----------------|-------------|
| Pearson | 0.867 | **0.878** | +1.1% |
| Spearman | 0.759 | **0.796** | +3.7% |
| Kendall | 0.564 | **0.603** | +3.9% |
## Paper
- **Base Model Paper**: [COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task](https://aclanthology.org/2022.wmt-1.52) (Rei et al., WMT 2022)
- **This Model**: Paper coming soon
## Repository
[https://github.com/LukaDarsalia/nmt_metrics_research](https://github.com/LukaDarsalia/nmt_metrics_research)
## License
Apache-2.0
## Usage (unbabel-comet)
Using this model requires unbabel-comet to be installed:
```bash
pip install --upgrade pip # ensures that pip is current
pip install unbabel-comet
```
### Option 1: Direct Download from HuggingFace
```python
from comet import load_from_checkpoint
import requests
import os
# Download the model checkpoint
model_url = "https://huggingface.co/Darsala/georgian_comet/resolve/main/model.ckpt"
model_path = "georgian_comet.ckpt"
# Download if not already present
if not os.path.exists(model_path):
response = requests.get(model_url)
with open(model_path, 'wb') as f:
f.write(response.content)
# Load the model
model = load_from_checkpoint(model_path)
# Prepare your data
data = [
{
"src": "The cat sat on the mat.",
"mt": "แแแขแ แแแก แฎแแแแฉแแแ.",
"ref": "แแแขแ แแฏแแ แฎแแแแฉแแแ."
},
{
"src": "Schools and kindergartens were opened.",
"mt": "แกแแแแแแ แแ แกแแแแแจแแ แแแฆแแแ แแแแฎแกแแ.",
"ref": "แแแแฎแกแแ แกแแแแแแ แแ แกแแแแแจแแ แแแฆแแแ."
}
]
# Get predictions
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
```
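The manual download above can also be replaced with `huggingface_hub`, which handles caching automatically (a sketch):

```python
from huggingface_hub import hf_hub_download
from comet import load_from_checkpoint

# Downloads once, then reuses the local cache on subsequent calls
model_path = hf_hub_download(repo_id="Darsala/georgian_comet", filename="model.ckpt")
model = load_from_checkpoint(model_path)
```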
### Option 2: Using comet CLI
First download the model checkpoint:
```bash
wget https://huggingface.co/Darsala/georgian_comet/resolve/main/model.ckpt -O georgian_comet.ckpt
```
Then use it with comet CLI:
```bash
comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model georgian_comet.ckpt
```
### Option 3: Integration with Evaluation Pipeline
```python
from comet import load_from_checkpoint
import pandas as pd
# Load model
model = load_from_checkpoint("georgian_comet.ckpt")
# Load your evaluation data
df = pd.read_csv("your_evaluation_data.csv")
# Prepare data in COMET format
data = [
{
"src": row["sourceText"],
"mt": row["targetText"],
"ref": row["referenceText"]
}
for _, row in df.iterrows()
]
# Get scores
scores = model.predict(data, batch_size=16)
print(f"Average score: {sum(scores['scores']) / len(scores['scores']):.3f}")
```
## Intended Uses
This model is intended to be used for **English-Georgian MT evaluation**.
Given a triplet with (source sentence in English, translation in Georgian, reference translation in Georgian), it outputs a single score between 0 and 1 where 1 represents a perfect translation.
### Primary Use Cases
1. **MT System Development**: Evaluate and compare different English-Georgian MT systems
2. **Quality Assurance**: Automated quality checks for Georgian translations
3. **Research**: Study MT evaluation for morphologically rich languages like Georgian
4. **Production Monitoring**: Track translation quality in production environments
### Out-of-Scope Use
- **Other Language Pairs**: This model is specifically fine-tuned for English-Georgian and may not perform well on other language pairs
- **Reference-Free Evaluation**: The model requires reference translations
- **Document-Level**: Optimized for sentence-level evaluation
## Training Details
### Training Data
- **Dataset**: 5,000 English-Georgian pairs from [corp.dict.ge](https://corp.dict.ge/)
- **MT Systems**: Translations from SMaLL-100, Google Translate, and Ucraft Translate
- **Scoring Method**: Knowledge distillation from Claude Sonnet 4 with added Gaussian noise (σ=3)
- **Details**: See [Darsala/georgian_metric_evaluation](https://huggingface.co/datasets/Darsala/georgian_metric_evaluation)
### Training Configuration
```yaml
regression_metric:
init_args:
nr_frozen_epochs: 0.3
keep_embeddings_frozen: True
optimizer: AdamW
encoder_learning_rate: 1.5e-05
learning_rate: 1.5e-05
loss: mse
dropout: 0.1
batch_size: 8
```
### Training Procedure
1. **Base Model**: Started from Unbabel/wmt22-comet-da checkpoint
2. **Knowledge Distillation**: Used Claude Sonnet 4 scores as training targets
3. **Robustness**: Added Gaussian noise to training scores to prevent overfitting
4. **Optimization**: 8 epochs with early stopping (patience=4) on validation Kendall's tau
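As a rough illustration of the noise step, each distilled score was perturbed before training (a sketch; names are illustrative, and σ=3 refers to the scale of the distilled scores rather than the model's final 0-1 output):

```python
import numpy as np

rng = np.random.default_rng(42)

def add_training_noise(scores: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Perturb distilled quality scores with Gaussian noise to reduce overfitting."""
    return scores + rng.normal(0.0, sigma, size=scores.shape)
```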
## Evaluation Results
### Test Set Performance
Evaluated on 400 human-annotated English-Georgian translation pairs:
| Metric | Score | p-value |
|--------|-------|---------|
| Pearson | 0.878 | < 0.001 |
| Spearman | 0.796 | < 0.001 |
| Kendall | 0.603 | < 0.001 |
### Comparison with Other Metrics
| Metric | Pearson | Spearman | Kendall |
|--------|---------|----------|---------|
| **Georgian-COMET** | **0.878** | 0.796 | 0.603 |
| Base COMET | 0.867 | 0.759 | 0.564 |
| LLM-Reference-Based | 0.852 | **0.798** | **0.660** |
| CHRF++ | 0.739 | 0.690 | 0.498 |
| TER | 0.466 | 0.443 | 0.311 |
| BLEU | 0.413 | 0.497 | 0.344 |
## Languages Covered
While the base model (XLM-R) covers 100+ languages, this fine-tuned version is specifically optimized for:
- **Source Language**: English (en)
- **Target Language**: Georgian (ka)
For other language pairs, we recommend using the base [Unbabel/wmt22-comet-da](https://huggingface.co/Unbabel/wmt22-comet-da) model.
## Limitations
1. **Language Specific**: Optimized only for English→Georgian evaluation
2. **Domain**: Training data primarily from corp.dict.ge (general/literary domain)
3. **Reference Required**: Cannot perform reference-free evaluation
4. **Sentence Level**: Not optimized for document-level evaluation
## Citation
If you use this model, please cite:
```bibtex
@misc{georgian-comet-2025,
title={Georgian-COMET: Fine-tuned COMET for English-Georgian MT Evaluation},
author={Luka Darsalia and Ketevan Bakhturidze and Saba Sturua},
year={2025},
publisher={HuggingFace},
url={https://huggingface.co/Darsala/georgian_comet}
}
@inproceedings{rei-etal-2022-comet,
title = "{COMET}-22: Unbabel-{IST} 2022 Submission for the Metrics Shared Task",
author = "Rei, Ricardo and
C. de Souza, Jos{\'e} G. and
Alves, Duarte and
Zerva, Chrysoula and
Farinha, Ana C and
Glushkova, Taisiya and
Lavie, Alon and
Coheur, Luisa and
Martins, Andr{\'e} F. T.",
booktitle = "Proceedings of the Seventh Conference on Machine Translation (WMT)",
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wmt-1.52",
pages = "578--585",
}
```
## Acknowledgments
- [Unbabel](https://unbabel.com/) team for the base COMET model
- [Anthropic](https://anthropic.com/) for Claude Sonnet 4 used in knowledge distillation
- [corp.dict.ge](https://corp.dict.ge/) for the Georgian-English corpus
- All contributors to the [nmt_metrics_research](https://github.com/LukaDarsalia/nmt_metrics_research) project |
rziga/mm_grounding_dino_tiny_o365v1_goldg_grit_v3det | rziga | 2025-05-30T09:48:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mm-grounding-dino",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T09:47:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
connector/pig-1k | connector | 2025-05-30T09:43:24Z | 0 | 1 | null | [
"pig",
"text-to-image",
"en",
"license:mit",
"region:us"
] | text-to-image | 2025-01-31T09:40:44Z | ---
license: mit
language:
- en
pipeline_tag: text-to-image
tags:
- pig
---
# pig studio model: pig-1k
- diffusion model for image generation
- compatible with t5xxl text encoder
- similar architecture to pixart-α but slightly different
- try it out you will know the difference
# pig studio model: pig-1k-aura
- diffusion model for image generation
- compatible with t5xl text encoder
- similar architecture to aura but slightly different
- try it out you will know the difference
# pig studio model: pig-1k-sd
- diffusion model for image generation
- compatible with clip:g-l and t5xxl text encoder
- similar architecture to sd but slightly different
- try it out you will know the difference
# pig studio model: pig-1k-flux
- diffusion model for image generation
- compatible with clip-l and t5xxl text encoder
- similar architecture to flux but slightly different
- try it out you will know the difference |
vertings6/a43d7ffd-1a66-4fad-a37c-9a62c594c155 | vertings6 | 2025-05-30T09:42:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-30T08:25:24Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a43d7ffd-1a66-4fad-a37c-9a62c594c155
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 03542368294c05c0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vertings6/a43d7ffd-1a66-4fad-a37c-9a62c594c155
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/03542368294c05c0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# a43d7ffd-1a66-4fad-a37c-9a62c594c155
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.912 | 0.0000 | 1 | 1.9706 |
| 1.595 | 0.0076 | 250 | 1.8830 |
| 1.9601 | 0.0152 | 500 | 1.8570 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ResembleAI/chatterbox | ResembleAI | 2025-05-30T09:37:10Z | 0 | 289 | chatterbox | [
"chatterbox",
"text-to-speech",
"speech generation",
"voice-cloning",
"en",
"license:mit",
"region:us"
] | text-to-speech | 2025-04-24T12:03:33Z | ---
license: mit
language:
- en
tags:
- text-to-speech
- speech generation
- voice-cloning
pipeline_tag: text-to-speech
library_name: chatterbox
---
<img width="800" alt="cb-big2" src="https://github.com/user-attachments/assets/bd8c5f03-e91d-4ee5-b680-57355da204d1" />
<h1 style="font-size: 32px">Chatterbox TTS</h1>
<div style="display: flex; align-items: center; gap: 12px">
<a href="https://resemble-ai.github.io/chatterbox_demopage/">
<img src="https://img.shields.io/badge/listen-demo_samples-blue" alt="Listen to Demo Samples" />
</a>
<a href="https://huggingface.co/spaces/ResembleAI/Chatterbox">
<img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-sm.svg" alt="Open in HF Spaces" />
</a>
<a href="https://podonos.com/resembleai/chatterbox">
<img src="https://static-public.podonos.com/badges/insight-on-pdns-sm-dark.svg" alt="Insight on Podos" />
</a>
</div>
<div style="display: flex; align-items: center; gap: 8px;">
<span style="font-style: italic;white-space: pre-wrap">Made with ❤️ by</span>
<img width="100" alt="resemble-logo-horizontal" src="https://github.com/user-attachments/assets/35cf756b-3506-4943-9c72-c05ddfa4e525" />
</div>
We're excited to introduce Chatterbox, [Resemble AI's](https://resemble.ai) first production-grade open source TTS model. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations.
Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open source TTS model to support **emotion exaggeration control**, a powerful feature that makes your voices stand out. Try it now on our [Hugging Face Gradio app.](https://huggingface.co/spaces/ResembleAI/Chatterbox)
If you like the model but need to scale or tune it for higher accuracy, check out our competitively priced TTS service (<a href="https://resemble.ai">link</a>). It delivers reliable performance with ultra-low latency of sub 200ms, ideal for production use in agents, applications, or interactive media.
# Key Details
- SoTA zeroshot TTS
- 0.5B Llama backbone
- Unique exaggeration/intensity control
- Ultra-stable with alignment-informed inference
- Trained on 0.5M hours of cleaned data
- Watermarked outputs
- Easy voice conversion script
- [Outperforms ElevenLabs](https://podonos.com/resembleai/chatterbox)
# Tips
- **General Use (TTS and Voice Agents):**
- The default settings (`exaggeration=0.5`, `cfg=0.5`) work well for most prompts.
- If the reference speaker has a fast speaking style, lowering `cfg` to around `0.3` can improve pacing.
- **Expressive or Dramatic Speech:**
- Try lower `cfg` values (e.g. `~0.3`) and increase `exaggeration` to around `0.7` or higher.
- Higher `exaggeration` tends to speed up speech; reducing `cfg` helps compensate with slower, more deliberate pacing.
# Installation
```
pip install chatterbox-tts
```
# Usage
```python
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS
model = ChatterboxTTS.from_pretrained(device="cuda")
text = "Ezreal and Jinx teamed up with Ahri, Yasuo, and Teemo to take down the enemy's Nexus in an epic late-game pentakill."
wav = model.generate(text)
ta.save("test-1.wav", wav, model.sr)
# If you want to synthesize with a different voice, specify the audio prompt
AUDIO_PROMPT_PATH="YOUR_FILE.wav"
wav = model.generate(text, audio_prompt_path=AUDIO_PROMPT_PATH)
ta.save("test-2.wav", wav, model.sr)
```
See `example_tts.py` for more examples.
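Following the tips above, a more expressive rendering can be requested through the generation knobs (a sketch assuming `generate()` exposes them as `exaggeration` and `cfg_weight`):

```python
# Dramatic delivery: raise exaggeration, lower CFG to keep pacing deliberate
wav = model.generate(
    text,
    exaggeration=0.7,
    cfg_weight=0.3,
)
ta.save("test-3.wav", wav, model.sr)
```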
# Acknowledgements
- [Cosyvoice](https://github.com/FunAudioLLM/CosyVoice)
- [HiFT-GAN](https://github.com/yl4579/HiFTNet)
- [Llama 3](https://github.com/meta-llama/llama3)
# Built-in PerTh Watermarking for Responsible AI
Every audio file generated by Chatterbox includes [Resemble AI's Perth (Perceptual Threshold) Watermarker](https://github.com/resemble-ai/perth) - imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy.
# Disclaimer
Don't use this model to do bad things. Prompts are sourced from freely available data on the internet. |
apriasmoro/27e554a7-9349-41b8-b91f-45cc2482a433 | apriasmoro | 2025-05-30T09:29:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T09:15:17Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 27e554a7-9349-41b8-b91f-45cc2482a433
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: true
chat_template: llama3
datasets:
- data_files:
- 12015d7c9ee7f3df_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: None
field_instruction: instruct
field_output: output
field_system: None
format: None
no_input_format: None
system_format: '{system}'
system_prompt: None
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/27e554a7-9349-41b8-b91f-45cc2482a433
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 15
micro_batch_size: 12
mlflow_experiment_name: /tmp/12015d7c9ee7f3df_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 200
sequence_len: 2048
special_tokens:
pad_token: </s>
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0aa91fdd-f464-4c35-9e87-5ba2524c6ecc
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: 0aa91fdd-f464-4c35-9e87-5ba2524c6ecc
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# 27e554a7-9349-41b8-b91f-45cc2482a433
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0702 | 1 | 1.5261 |
| No log | 0.2105 | 3 | 1.5915 |
| No log | 0.4211 | 6 | 1.5176 |
| No log | 0.6316 | 9 | 1.4834 |
| 2.1415 | 0.8421 | 12 | 1.4475 |
| 2.1415 | 1.0 | 15 | 1.5053 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
ajia2/qwen_sft_trained_v3 | ajia2 | 2025-05-30T09:16:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T09:16:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
muktar66alam/gfy | muktar66alam | 2025-05-30T09:13:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-30T09:13:47Z | ---
license: creativeml-openrail-m
---
|
anonymous6435/llemma-isar | anonymous6435 | 2025-05-30T09:09:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T08:22:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vanhai123/skin_cancer_detection | vanhai123 | 2025-05-30T08:59:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T08:49:36Z | ---
license: apache-2.0
---
# Skin Cancer Detection with CNN
This is a deep learning model that uses a Convolutional Neural Network (CNN) to classify **9 types of skin cancer** from dermatoscopic images, based on the ISIC dataset.
---
## Purpose
The model was built for **research and academic** purposes, supporting AI-assisted diagnosis of dermatological images.
* Easy to apply in Google Colab or any TensorFlow environment
* Can serve as a baseline for follow-up research
---
## Dataset
* Source: [Skin Cancer 9 Classes (ISIC)](https://www.kaggle.com/datasets/nodoubttome/skin-cancer9-classesisic)
* Contains 3,600 images of skin lesions, split evenly across 9 classes
### Disease Classes:
1. Pigmented Benign Keratosis
2. Melanoma
3. Vascular Lesion
4. Actinic Keratosis
5. Squamous Cell Carcinoma
6. Basal Cell Carcinoma
7. Seborrheic Keratosis
8. Dermatofibroma
9. Nevus
---
## Model Performance
* **Mean AUC**: 0.99
* **Accuracy on the test set**: 92%
* Detailed evaluation: precision, recall, and F1-score for each class
* Visualized with ROC curves, a confusion matrix, and predictions on random samples
---
## How to Use the `.h5` Model
```python
from tensorflow.keras.models import load_model
# Load the model from the downloaded file
model = load_model("skin_cancer_model.h5")
# Predict on a preprocessed image
pred = model.predict(image_tensor)
```
> Input images must be resized to the resolution used in training (e.g., 224x224 RGB)
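A minimal preprocessing sketch showing how `image_tensor` might be produced (assumptions: 224x224 RGB input scaled to [0, 1] and a hypothetical input file; match this to how the model was actually trained):
```python
import numpy as np
from tensorflow.keras.preprocessing import image

def preprocess(path, target_size=(224, 224)):
    img = image.load_img(path, target_size=target_size)  # resize to the training resolution
    arr = image.img_to_array(img) / 255.0                # scale pixels to [0, 1] (assumed)
    return np.expand_dims(arr, axis=0)                   # add a batch dimension

image_tensor = preprocess("lesion.jpg")                  # hypothetical input file
pred = model.predict(image_tensor)
print("Predicted class index:", int(np.argmax(pred, axis=1)[0]))
```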
---
## License and Author
* Author: [Hà Văn Hải](https://www.kaggle.com/haivan11)
* License: MIT License, permitting non-commercial and academic use
> If you use this model in your research, please cite it or credit the source appropriately.
---
## Contact
If you need support, want to discuss the model, or are interested in research collaboration, feel free to contact me via Hugging Face or Kaggle.
|
najabba/MNLP_M2_quantized_model | najabba | 2025-05-30T08:56:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-27T17:39:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/2b389e82-62c7-44c5-8c60-6e158f12e8ec | sergioalves | 2025-05-30T08:51:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9",
"base_model:adapter:samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-30T08:38:20Z | ---
library_name: peft
base_model: samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b389e82-62c7-44c5-8c60-6e158f12e8ec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 77e3105900c47af2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/2b389e82-62c7-44c5-8c60-6e158f12e8ec
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/77e3105900c47af2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8602e2e9-5dac-48f0-b259-04275ef943bc
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 8602e2e9-5dac-48f0-b259-04275ef943bc
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 2b389e82-62c7-44c5-8c60-6e158f12e8ec
This model is a fine-tuned version of [samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9](https://huggingface.co/samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7231 | 0.0003 | 1 | 1.0599 |
| 1.3666 | 0.0643 | 250 | 1.0295 |
| 1.3012 | 0.1285 | 500 | 1.0211 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
danhtran2mind/ghibli-fine-tuned-sd-2.1-int8 | danhtran2mind | 2025-05-30T08:43:47Z | 0 | 0 | null | [
"text-to-image",
"en",
"base_model:danhtran2mind/ghibli-fine-tuned-sd-2.1",
"base_model:finetune:danhtran2mind/ghibli-fine-tuned-sd-2.1",
"license:mit",
"region:us"
] | text-to-image | 2025-05-30T05:58:46Z | ---
license: mit
language:
- en
base_model:
- danhtran2mind/ghibli-fine-tuned-sd-2.1
pipeline_tag: text-to-image
---
<div align="center">
<h1>
Ghibli Fine-tuned Stable Diffusion 2.1 Quantization Int8
</h1>
<a href="https://github.com/your-repo/releases/tag/v1.0.0">
<img src="https://img.shields.io/badge/version-1.0.0-blue.svg" alt="Version 1.0.0">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/license-MIT-green.svg" alt="License MIT">
</a>
<a href="https://www.python.org">
<img src="https://img.shields.io/badge/python-3.8%2B-blue.svg?logo=python" alt="Python 3.8+">
</a>
<a href="https://pytorch.org">
<img src="https://img.shields.io/badge/PyTorch-2.0%2B-orange.svg?logo=pytorch" alt="PyTorch 2.0+">
</a>
<a href="https://huggingface.co/docs/diffusers">
<img src="https://img.shields.io/badge/diffusers-0.20%2B-red.svg?logo=huggingface" alt="Diffusers 0.20+">
</a>
<a href="https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html">
<img src="https://img.shields.io/badge/OpenVINO-2023.0%2B-blue.svg?logo=intel" alt="OpenVINO 2023.0+">
</a>
</div>
## Quantize from the Base Model
### Install Dependencies
```bash
pip install -q "optimum-intel[openvino,diffusers]" torch transformers diffusers openvino nncf optimum-quanto
```
### Import Libraries
```python
from diffusers import StableDiffusionPipeline, AutoencoderKL, UNet2DConditionModel, PNDMScheduler
from transformers import AutoTokenizer, CLIPTextModel, CLIPTokenizer
from optimum.intel import OVStableDiffusionPipeline
from optimum.intel import OVQuantizer, OVConfig, OVWeightQuantizationConfig
import torch
from nncf import CompressWeightsMode
import os
```
### Load Base Model
```python
model_id = "danhtran2mind/ghibli-fine-tuned-sd-2.1"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Load and export the model to OpenVINO format
pipeline = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype)
```
### Export to OpenVINO format without quantization
```python
# Export to OpenVINO format without quantization
ov_pipeline = OVStableDiffusionPipeline.from_pretrained(
model_id,
export=True,
compile=False,
load_in_8bit=False, # Explicitly disable 8-bit quantization
load_in_4bit=False, # Explicitly disable 4-bit quantization
torch_dtype=dtype
)
```
### Define INT8 quantization configuration
```python
# Define INT8 quantization configuration
ov_weight_config_int8 = OVWeightQuantizationConfig(
weight_only=True,
bits=8,
mode=CompressWeightsMode.INT8_SYM # Use enum instead of string
)
ov_config_int8 = OVConfig(quantization_config=ov_weight_config_int8)
```
### Process and Save the Quantized Model
```python
# Create Quantization Directory
save_dir_int8 = "ghibli_sd_int8"
os.makedirs(save_dir_int8, exist_ok=True)
# Initialize quantizer
quantizer = OVQuantizer.from_pretrained(ov_pipeline, task="stable-diffusion")
# Quantize the model
quantizer.quantize(ov_config=ov_config_int8, save_directory=save_dir_int8)
# Save scheduler and tokenizer
pipeline.scheduler.save_pretrained(save_dir_int8)
pipeline.tokenizer.save_pretrained(save_dir_int8)
```
## Usage
### Install Dependencies
```bash
pip install -q "optimum-intel[openvino,diffusers]" openvino
```
### Import Libraries
```python
import torch
from optimum.intel import OVStableDiffusionPipeline
```
### Load the Quantized Model
```python
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = OVStableDiffusionPipeline.from_pretrained("danhtran2mind/ghibli-fine-tuned-sd-2.1-int8")
pipe.to(device)
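# Generate an image from a text prompt. Illustrative sketch: the prompt text
# and num_inference_steps below are assumptions, not from the original card.
prompt = "a quiet hillside village at dawn, Ghibli style"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("ghibli_sample.png")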
``` |
bilyxu/DeepSeek-7B-InterTrade-0530-merged | bilyxu | 2025-05-30T08:39:18Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T08:35:44Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lostinjamal/83eeafb7-7f66-4d67-b045-208830409f3e | lostinjamal | 2025-05-30T08:39:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | 2025-05-30T07:33:46Z | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 83eeafb7-7f66-4d67-b045-208830409f3e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4c1e0dbdb731bb5b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lostinjamal/83eeafb7-7f66-4d67-b045-208830409f3e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4c1e0dbdb731bb5b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d2d9d517-c659-4e67-92b9-9e0686192de5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d2d9d517-c659-4e67-92b9-9e0686192de5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 83eeafb7-7f66-4d67-b045-208830409f3e
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0000 | 3 | nan |
| 0.0 | 0.0001 | 6 | nan |
| 0.0 | 0.0001 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jokerwu0519/dummy-model | jokerwu0519 | 2025-05-30T08:34:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-30T08:16:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
raeioumon/patato | raeioumon | 2025-05-30T08:30:25Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-30T07:18:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
poltextlab/xlm-roberta-large-pooled-cap-media2 | poltextlab | 2025-05-30T08:23:21Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"pytorch",
"en",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-29T12:39:11Z | ---
model-index:
- name: xlm-roberta-large
results:
- task:
type: text-classification
dataset:
name: media2_v2_25_05_21_test.csv
type: media2_v2_25_05_21_test.csv
metrics:
- name: Accuracy
type: Accuracy
value: 79
- name: F1-Score
type: F1-Score
value: 79
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- recall
- precision
- f1-score
language:
- en
base_model:
- FacebookAI/xlm-roberta-large
pipeline_tag: text-classification
library_name: transformers
license: mit
extra_gated_prompt: Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manually review subscriptions.
extra_gated_fields:
Name: text
Country: country
Institution: text
Institution Email: text
Please specify your academic use case: text
---
# xlm-roberta-large-pooled-cap-media2
## Model description
An `xlm-roberta-large` model fine-tuned on multilingual (English, German, Hungarian, Spanish, Slovak) training data labelled with
[major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
In addition, we used the following 18 media codes:
* State and Local Government Administration (24)
* Weather (25)
* Fires, emergencies and natural disasters (26)
* Crime and trials (27)
* Arts, culture, entertainment and history (28)
* Style and fashion (29)
* Food (30)
* Travel (31)
* Wellbeing and learning (32)
* Personal finance and real estate (33)
* Personal technology and popular science (34)
* Churches and Religion (35)
* Celebrities and human interest (36)
* Obituaries and death notices (37)
* Sports (38)
* Crosswords, puzzles, comics (39)
* Media production/internal, letters (40)
* Advertisements (41)
## How to use the model
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
model="poltextlab/xlm-roberta-large-pooled-cap-media2",
task="text-classification",
tokenizer=tokenizer,
use_fast=False,
token="<your_hf_read_only_token>"
)
text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
### Gated access
Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
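For example (a minimal sketch; the token value is a placeholder):
```python
from transformers import AutoModelForSequenceClassification

# On transformers >= 4.27 pass `token`; on older versions use `use_auth_token`
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-pooled-cap-media2",
    token="<your_hf_read_only_token>",
)
```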
## Model performance
The model was evaluated on a test set of 74,322 English examples.<br>
* Accuracy: **0.79**
* Precision: **0.77**
* Recall: **0.77**
* Weighted average F1-score: **0.79**

### Heatmap

### Classification Report
| Class | precision | recall | f1-score | support |
|:-----------------------------------------------|------------:|---------:|-----------:|----------:|
| Macroeconomics (1) | 0.71 | 0.75 | 0.73 | 2471 |
| Civil Rights (2) | 0.71 | 0.66 | 0.69 | 1886 |
| Health (3) | 0.81 | 0.83 | 0.82 | 2471 |
| Agriculture (4) | 0.77 | 0.76 | 0.76 | 811 |
| Labor (5) | 0.72 | 0.7 | 0.71 | 1277 |
| Education (6) | 0.84 | 0.87 | 0.86 | 2080 |
| Environment (7) | 0.76 | 0.79 | 0.78 | 1283 |
| Energy (8) | 0.79 | 0.83 | 0.81 | 1370 |
| Immigration (9) | 0.71 | 0.78 | 0.74 | 514 |
| Transportation (10) | 0.8 | 0.82 | 0.81 | 2375 |
| Law and Crime (12) | 0.68 | 0.67 | 0.67 | 2471 |
| Social Welfare (13) | 0.67 | 0.69 | 0.68 | 683 |
| Housing (14) | 0.72 | 0.71 | 0.71 | 1023 |
| Banking, Finance, and Domestic Commerce (15) | 0.72 | 0.68 | 0.7 | 2471 |
| Defense (16) | 0.74 | 0.77 | 0.75 | 2471 |
| Technology (17) | 0.73 | 0.73 | 0.73 | 1375 |
| Foreign Trade (18) | 0.71 | 0.64 | 0.67 | 533 |
| International Affairs (19) | 0.69 | 0.62 | 0.66 | 2471 |
| Government Operations (20) | 0.72 | 0.65 | 0.68 | 2471 |
| Public Lands (21) | 0.64 | 0.64 | 0.64 | 554 |
| Culture (23) | 0.73 | 0.75 | 0.74 | 2142 |
| State and Local Government Administration (24) | 0.79 | 0.73 | 0.76 | 2471 |
| Weather (25) | 0.98 | 0.98 | 0.98 | 2471 |
| Fires, emergencies and natural disasters (26) | 0.96 | 0.98 | 0.97 | 2471 |
| Crime and trials (27) | 0.77 | 0.84 | 0.8 | 2467 |
| Arts, culture, entertainment and history (28) | 0.78 | 0.72 | 0.75 | 2423 |
| Style and fashion (29) | 0.8 | 0.69 | 0.74 | 2407 |
| Food (30) | 0.79 | 0.83 | 0.81 | 2210 |
| Travel (31) | 0.8 | 0.86 | 0.83 | 2095 |
| Wellbeing and learning (32) | 0.77 | 0.81 | 0.79 | 2376 |
| Personal finance and real estate (33) | 0.84 | 0.85 | 0.85 | 2222 |
| Personal technology and popular science (34) | 0.82 | 0.83 | 0.82 | 2388 |
| Churches and Religion (35) | 0.92 | 0.94 | 0.93 | 2469 |
| Celebrities and human interest (36) | 0.84 | 0.87 | 0.86 | 2454 |
| Obituaries and death notices (37) | 0.88 | 0.92 | 0.9 | 2407 |
| Sports (38) | 0.89 | 0.89 | 0.89 | 2423 |
| Crosswords, puzzles, comics (39) | 0.96 | 0.95 | 0.96 | 126 |
| Media production/internal, letters (40) | 0.9 | 0.9 | 0.9 | 763 |
| Advertisements (41) | 0 | 0 | 0 | 5 |
| No Policy and No Media Content (998) | 0.82 | 0.8 | 0.81 | 2471 |
| accuracy                                       |        0.79 |     0.79 |       0.79 |     74322 |
| macro avg | 0.77 | 0.77 | 0.77 | 74322 |
| weighted avg | 0.79 | 0.79 | 0.79 | 74322 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue. |
BootesVoid/cmb9l1kt60f411b1yxecyvux5_cmbai6kfq02muhy17puzk7y2q | BootesVoid | 2025-05-30T08:17:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T08:17:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EMMA
---
# Cmb9L1Kt60F411B1Yxecyvux5_Cmbai6Kfq02Muhy17Puzk7Y2Q
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EMMA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EMMA",
"lora_weights": "https://huggingface.co/BootesVoid/cmb9l1kt60f411b1yxecyvux5_cmbai6kfq02muhy17puzk7y2q/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb9l1kt60f411b1yxecyvux5_cmbai6kfq02muhy17puzk7y2q', weight_name='lora.safetensors')
image = pipeline('EMMA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb9l1kt60f411b1yxecyvux5_cmbai6kfq02muhy17puzk7y2q/discussions) to add images that show off what you've made with this LoRA.
|
RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF | RiggityWrckd | 2025-05-30T08:16:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"llama-cpp",
"gguf-my-repo",
"any-to-any",
"en",
"base_model:Qwen/Qwen2.5-Omni-7B",
"base_model:quantized:Qwen/Qwen2.5-Omni-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | any-to-any | 2025-05-30T08:15:32Z | ---
license: other
license_name: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Omni-7B/blob/main/LICENSE
language:
- en
tags:
- multimodal
- llama-cpp
- gguf-my-repo
library_name: transformers
pipeline_tag: any-to-any
base_model: Qwen/Qwen2.5-Omni-7B
---
# RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Omni-7B`](https://huggingface.co/Qwen/Qwen2.5-Omni-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Omni-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -c 2048
```
|
RoyRoyRpy/test_fine-tuned-visionllama_100_epo1 | RoyRoyRpy | 2025-05-30T08:14:34Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-05-30T08:14:08Z | ---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: test_fine-tuned-visionllama_100_epo1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_fine-tuned-visionllama_100_epo1
This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.4.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.3 |
sinhac332/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus | sinhac332 | 2025-05-30T08:04:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pensive foraging platypus",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T19:40:20Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pensive foraging platypus
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sinhac332/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Seanwang1221/LiYitong_FLUX | Seanwang1221 | 2025-05-30T08:03:17Z | 13 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-25T12:14:38Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
LYT,Nikon Z7 II and a NIKKOR Z 50mm f,, beautiful woman Illuminated by the
ethereal glow of studio lightning, the light is reflecting shadows on the
womans face, the light reflection sparcles around her, the harmonic play of
light and shadow underlines the natural beauty of the woman, standing, from
below, leaning forward, front view, (wearing reij-cybrwrdrbst01,
cyberbodysuit, neon pink details, neon purple detailed, cyborg body details,
choker),, (purple detailed background), selfie
output:
url: images/Liblib_00003_.png
- text: >-
LYT,(Cinematic, award-winning artwork, many details, super detailed, high
quality, best quality, ultra-detailed, very aesthetic, illustration, perfect
composition, intricate details, absurdres, high res image, masterpiece,
vibrant colors, beautiful face, detailed face, 1girl, solo focus, perfect
eyes, detailed eyes), female, pale skin, full lips, huge thighs, mature
body, huge breasts, curvy body, wide hips, perfect ass, long black hair,
hime-cut hair, brown eyes, (white kimono, patterned kimono, red floral
patterns on kimono), (suggestive pose, sultry smile), (outdoors, riverside
village, Japanese architecture)
output:
url: images/Liblib_00005_.png
- text: >-
LYT,1girl, (wearing a cheongsam:1.2),(in london city:1.2),(RAW photo, best
quality), (realistic, photo-realistic:1.4), masterpiece, an extremely
delicate and beautiful, extremely detailed, 2k wallpaper, Amazing, finely
detail, extremely detailed CG unity 8k wallpaper, ultra-detailed, highres,
soft light, beautiful detailed girl, extremely detailed eyes and face,
beautiful detailed nose, beautiful detailed eyes,cinematic lighting,perfect
anatomy,(slim body:1.3),long hair,(black hair:1.2),city lights at
night,smiling
output:
url: images/Liblib_00015_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: LYT
---
# Li Yitong FLUX
<Gallery />
## Model description

## Trigger words
You should use `LYT` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/LiYitong_FLUX/tree/main) them in the Files & versions tab.
|
CHOOSEIT/MCQA_rsLoRA_DoRA_SM1AR_5E | CHOOSEIT | 2025-05-30T07:56:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T07:55:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
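In the absence of author-provided instructions, a minimal sketch using the standard transformers chat API is given below. Only the repo id comes from this card (it is tagged `transformers`, `qwen3`, `text-generation`); the prompt and generation settings are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this model card; everything else here is an illustrative assumption.
model_id = "CHOOSEIT/MCQA_rsLoRA_DoRA_SM1AR_5E"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The model name suggests multiple-choice QA, so a hypothetical MCQA-style prompt:
messages = [{"role": "user", "content": "Which planet is known as the Red Planet? A) Venus B) Mars C) Jupiter"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```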
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
smartmind/KURE-v1 | smartmind | 2025-05-30T07:50:02Z | 2 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1879136",
"loss:CachedGISTEmbedLoss",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-30T07:18:21Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1879136
- loss:CachedGISTEmbedLoss
license: mit
metrics:
- recall
- precision
- f1
base_model:
- BAAI/bge-m3
library_name: sentence-transformers
---
# ๐ KURE-v1
## Example code
### Install Dependencies
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
### Python code
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("nlpai-lab/KURE-v1")
# Run inference
sentences = [
'ํ๋ฒ๊ณผ ๋ฒ์์กฐ์ง๋ฒ์ ์ด๋ค ๋ฐฉ์์ ํตํด ๊ธฐ๋ณธ๊ถ ๋ณด์ฅ ๋ฑ์ ๋ค์ํ ๋ฒ์ ๋ชจ์์ ๊ฐ๋ฅํ๊ฒ ํ์ด',
'4. ์์ฌ์ ๊ณผ ๊ฐ์ ๋ฐฉํฅ ์์ ์ดํด๋ณธ ๋ฐ์ ๊ฐ์ด ์ฐ๋ฆฌ ํ๋ฒ๊ณผ ๏ฝข๋ฒ์์กฐ์ง ๋ฒ๏ฝฃ์ ๋๋ฒ์ ๊ตฌ์ฑ์ ๋ค์ํํ์ฌ ๊ธฐ๋ณธ๊ถ ๋ณด์ฅ๊ณผ ๋ฏผ์ฃผ์ฃผ์ ํ๋ฆฝ์ ์์ด ๋ค๊ฐ์ ์ธ ๋ฒ์ ๋ชจ์์ ๊ฐ๋ฅํ๊ฒ ํ๋ ๊ฒ์ ๊ทผ๋ณธ ๊ท๋ฒ์ผ๋ก ํ๊ณ ์๋ค. ๋์ฑ์ด ํฉ์์ฒด๋ก์์ ๋๋ฒ์ ์๋ฆฌ๋ฅผ ์ฑํํ๊ณ ์๋ ๊ฒ ์ญ์ ๊ทธ ๊ตฌ์ฑ์ ๋ค์์ฑ์ ์์ฒญํ๋ ๊ฒ์ผ๋ก ํด์๋๋ค. ์ด์ ๊ฐ์ ๊ด์ ์์ ๋ณผ ๋ ํ์ง ๋ฒ์์ฅ๊ธ ๊ณ ์๋ฒ๊ด์ ์ค์ฌ์ผ๋ก ๋๋ฒ์์ ๊ตฌ์ฑํ๋ ๊ดํ์ ๊ฐ์ ํ ํ์๊ฐ ์๋ ๊ฒ์ผ๋ก ๋ณด์ธ๋ค.',
    '์ฐ๋ฐฉํ๋ฒ์ฌํ์๋ 2001๋
 1์ 24์ผ 5:3์ ๋ค์๊ฒฌํด๋ก ใ๋ฒ์์กฐ์ง๋ฒใ ์ 169์กฐ ์ 2๋ฌธ์ด ํ๋ฒ์ ํฉ์น๋๋ค๋ ํ๊ฒฐ์ ๋ด๋ ธ์ โ 5์ธ์ ๋ค์ ์ฌํ๊ด์ ์์ก๊ด๊ณ์ธ์ ์ธ๊ฒฉ๊ถ ๋ณดํธ, ๊ณต์ ํ ์ ์ฐจ์ ๋ณด์ฅ๊ณผ ๋ฐฉํด๋ฐ์ง ์๋ ๋ฒ๊ณผ ์ง์ค ๋ฐ๊ฒฌ ๋ฑ์ ๊ทผ๊ฑฐ๋ก ํ์ฌ ํ
๋ ๋น์ ์ดฌ์์ ๋ํ ์ ๋์ ์ธ ๊ธ์ง๋ฅผ ํ๋ฒ์ ํฉ์นํ๋ ๊ฒ์ผ๋ก ๋ณด์์ โ ๊ทธ๋ฌ๋ ๋๋จธ์ง 3์ธ์ ์ฌํ๊ด์ ํ์ ๋ฒ์์ ์์ก์ ์ฐจ๋ ํน๋ณํ ์ธ๊ฒฉ๊ถ ๋ณดํธ์ ์ด์ต๋ ์์ผ๋ฉฐ, ํ
๋ ๋น์ ๊ณต๊ฐ์ฃผ์๋ก ์ธํด ๋ฒ๊ณผ ์ง์ค ๋ฐ๊ฒฌ์ ๊ณผ์ ์ด ์ธ์ ๋ ์ํ๋กญ๊ฒ ๋๋ ๊ฒ์ ์๋๋ผ๋ฉด์ ๋ฐ๋์๊ฒฌ์ ์ ์ํจ โ ์๋ํ๋ฉด ํ์ ๋ฒ์์ ์์ก์ ์ฐจ์์๋ ์์ก๋น์ฌ์๊ฐ ๊ฐ์ธ์ ์ผ๋ก ์ง์ ์ฌ๋ฆฌ์ ์ฐธ์ํ๊ธฐ๋ณด๋ค๋ ๋ณํธ์ฌ๊ฐ ์ฐธ์ํ๋ ๊ฒฝ์ฐ๊ฐ ๋ง์ผ๋ฉฐ, ์ฌ๋ฆฌ๋์๋ ์ฌ์ค๋ฌธ์ ๊ฐ ์๋ ๋ฒ๋ฅ ๋ฌธ์ ๊ฐ ๋๋ถ๋ถ์ด๊ธฐ ๋๋ฌธ์ด๋ผ๋ ๊ฒ์ โก ํํธ, ์ฐ๋ฐฉํ๋ฒ์ฌํ์๋ ใ์ฐ๋ฐฉํ๋ฒ์ฌํ์๋ฒใ(Bundesverfassungsgerichtsgesetz: BVerfGG) ์ 17a์กฐ์ ๋ฐ๋ผ ์ ํ์ ์ด๋๋ง ์ฌํ์ ๋ํ ๋ฐฉ์ก์ ํ์ฉํ๊ณ ์์ โ ใ์ฐ๋ฐฉํ๋ฒ์ฌํ์๋ฒใ ์ 17์กฐ์์ ใ๋ฒ์์กฐ์ง๋ฒใ ์ 14์ ๋ด์ง ์ 16์ ์ ๊ท์ ์ ์ค์ฉํ๋๋ก ํ๊ณ ์์ง๋ง, ๋
น์์ด๋ ์ดฌ์์ ํตํ ์ฌํ๊ณต๊ฐ์ ๊ด๋ จํ์ฌ์๋ ใ๋ฒ์์กฐ์ง๋ฒใ๊ณผ ๋ค๋ฅธ ๋ด์ฉ์ ๊ท์ ํ๊ณ ์์',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# Results for KURE-v1
# tensor([[1.0000, 0.6967, 0.5306],
# [0.6967, 1.0000, 0.4427],
# [0.5306, 0.4427, 1.0000]])
```
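Since KURE is typically used for retrieval, a small illustrative extension of the example above ranks the two passages against the query; variable names reuse the snippet above, and the ranking loop is an assumption about usage, not taken from the original card:

```python
# Rank the candidate passages (sentences[1:]) against the query (sentences[0]).
query_emb = embeddings[0:1]
doc_embs = embeddings[1:]
scores = model.similarity(query_emb, doc_embs)[0]  # similarity of the query to each passage
for rank, idx in enumerate(scores.argsort(descending=True), start=1):
    print(f"rank {rank}: passage {int(idx) + 1}, score {float(scores[idx]):.4f}")
```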
|
TOMFORD79/Tom6 | TOMFORD79 | 2025-05-30T07:48:01Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-30T07:40:46Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
nicofarr/nanoVLM | nicofarr | 2025-05-30T07:44:51Z | 6 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-29T08:37:22Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model at https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("nicofarr/nanoVLM")
```
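As a quick sanity check of the advertised size, the loaded model is a plain PyTorch `nn.Module`, so its parameter count can be verified directly (an illustrative snippet, not part of the original card):

```python
# Count all parameters of the loaded nanoVLM model.
total_params = sum(p.numel() for p in model.parameters())
print(f"~{total_params / 1e6:.0f}M parameters")  # expected to print roughly 222M
```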
|
QuantTrio/DeepSeek-R1-0528-Qwen3-8B-GPTQ-Int4-Int8Mix | QuantTrio | 2025-05-30T07:43:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"DeepSeek-R1-0528",
"GPTQ",
"Int4-Int8Mix",
"้ๅไฟฎๅค",
"vLLM",
"conversational",
"arxiv:2501.12948",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2025-05-30T07:27:28Z | ---
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- DeepSeek-R1-0528
- GPTQ
- Int4-Int8Mix
- ้ๅไฟฎๅค
- vLLM
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
base_model_relation: quantized
---
# DeepSeek-R1-0528-Qwen3-8B-GPTQ-Int4-Int8Mix
Base model: [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://www.modelscope.cn/models/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)
### ใModel Update Historyใ
```
2025-05-29
1. Initial commit
```
### ใDependenciesใ
```
vllm==0.9.0
transformers==4.52.3
```
<div style="
background: rgba(255, 193, 61, 0.15);
padding: 16px;
border-radius: 6px;
border: 1px solid rgba(255, 165, 0, 0.3);
margin: 16px 0;
">
### ใ๐ก Notes for Newer vLLM Versions ๐กใ
#### 1. The V0 inference mode is recommended
Before launching vLLM, first set the environment variable:
```
export VLLM_USE_V1=0
```
</div>
### ใModel Filesใ
| File Size | Last Updated |
|---------|--------------|
| `6.9GB` | `2025-05-29` |
### ใModel Downloadใ
```python
from modelscope import snapshot_download
snapshot_download('tclf90/DeepSeek-R1-0528-Qwen3-8B-GPTQ-Int4-Int8Mix', cache_dir="local_path")  # replace "local_path" with your local download directory
```
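With the weights downloaded, a minimal offline-inference sketch with vLLM's Python API follows. The local path is a placeholder, and the sampling settings mirror the temperature/top-p values reported for the upstream evaluations rather than anything mandated by this card:

```python
import os
os.environ["VLLM_USE_V1"] = "0"  # use the V0 engine, as recommended above; must be set before importing vllm

from vllm import LLM, SamplingParams

# Placeholder path: wherever snapshot_download placed the weights.
llm = LLM(model="local_path/DeepSeek-R1-0528-Qwen3-8B-GPTQ-Int4-Int8Mix")
sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=2048)

outputs = llm.generate(["Prove that the square root of 2 is irrational."], sampling)
print(outputs[0].outputs[0].text)
```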
### ใIntroductionใ
## DeepSeek-R1-0528
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/๐ค%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>๐๏ธ</a>
</p>
## 1. Introduction
The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro.
<p align="center">
<img width="80%" src="figures/benchmark.png">
</p>
Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model's accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.
Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and better experience for vibe coding.
## 2. Evaluation Results
### DeepSeek-R1-0528
For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1.
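For reference, the pass@1 estimator implied by this setup is the standard unbiased mean over the $k$ sampled responses (stated here for clarity; this is the common definition, not quoted from the card):

$$\text{pass@1} = \frac{1}{k}\sum_{i=1}^{k} p_i, \qquad k = 16,$$

where $p_i \in \{0, 1\}$ marks whether the $i$-th sampled response is correct.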
<div align="center">
| Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |
|----------|----------------------------------|-------------|------------------|
| General | | | |
| | MMLU-Redux (EM) | 92.9 | 93.4 |
| | MMLU-Pro (EM) | 84.0 | 85.0 |
| | GPQA-Diamond (Pass@1) | 71.5 | 81.0 |
| | SimpleQA (Correct) | 30.1 | 27.8 |
| | FRAMES (Acc.) | 82.5 | 83.0 |
| | Humanity's Last Exam (Pass@1) | 8.5 | 17.7 |
| Code | | | |
| | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3 |
| | Codeforces-Div1 (Rating) | 1530 | 1930 |
| | SWE Verified (Resolved) | 49.2 | 57.6 |
| | Aider-Polyglot (Acc.) | 53.3 | 71.6 |
| Math | | | |
| | AIME 2024 (Pass@1) | 79.8 | 91.4 |
| | AIME 2025 (Pass@1) | 70.0 | 87.5 |
| | HMMT 2025 (Pass@1) | 41.7 | 79.4 |
| | CNMO 2024 (Pass@1) | 78.8 | 86.9 |
| Tools | | | |
| | BFCL_v3_MultiTurn (Acc) | - | 37.0 |
| | Tau-Bench (Pass@1) | - | 53.5 (Airline) / 63.9 (Retail) |
</div>
Note: We use the Agentless framework to evaluate model performance on SWE-Verified. We evaluate only text-only prompts in the HLE test set. GPT-4.1 is employed to act as the user role in the Tau-Bench evaluation.
### DeepSeek-R1-0528-Qwen3-8B
Meanwhile, we distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. This model achieves state-of-the-art (SOTA) performance among open-source models on AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. We believe that the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for both academic research on reasoning models and industrial development focused on small-scale models.
| | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench (2408-2505) |
|--------------------------------|---------|---------|-------------|--------------|---------------------------|
| Qwen3-235B-A22B | 85.7 | 81.5 | 62.5 | 71.1 | 66.5 |
| Qwen3-32B | 81.4 | 72.9 | - | 68.4 | - |
| Qwen3-8B | 76.0 | 67.3 | - | 62.0 | - |
| Phi-4-Reasoning-Plus-14B | 81.3 | 78.0 | 53.6 | 69.3 | - |
| Gemini-2.5-Flash-Thinking-0520 | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 |
| o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 |
| DeepSeek-R1-0528-Qwen3-8B | 86.0 | 76.3 | 61.5 | 61.1 | 60.5 |
## 5. License
This code repository is licensed under [MIT License](LICENSE). The use of DeepSeek-R1 models is also subject to [MIT License](LICENSE). DeepSeek-R1 series (including Base and Chat) supports commercial use and distillation.
## 6. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|