modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
string | string | timestamp[us, tz=UTC] | int64 | int64 | string | sequence | string | timestamp[us, tz=UTC] | string
---|---|---|---|---|---|---|---|---|---|
mradermacher/ms-marco-TinyBERT-L6-GGUF | mradermacher | 2025-05-31T09:54:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:sentence-transformers/msmarco",
"base_model:cross-encoder/ms-marco-TinyBERT-L6",
"base_model:quantized:cross-encoder/ms-marco-TinyBERT-L6",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T09:52:03Z | ---
base_model: cross-encoder/ms-marco-TinyBERT-L6
datasets:
- sentence-transformers/msmarco
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L6
<!-- provided-files -->
Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
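As a concrete starting point, the sketch below downloads one quant from this repo and loads it with `llama-cpp-python`; the file name comes from the table below, and llama.cpp support for this BERT-based cross-encoder (a feature-extraction model, not a chat model) is an assumption, not something this card verifies.
```python
# A minimal sketch, assuming llama-cpp-python and huggingface_hub are installed
# and that llama.cpp supports this BERT-based cross-encoder architecture.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant file from this repo (Q4_K_M is a reasonable default).
path = hf_hub_download(
    repo_id="mradermacher/ms-marco-TinyBERT-L6-GGUF",
    filename="ms-marco-TinyBERT-L6.Q4_K_M.gguf",
)

# The base model is tagged feature-extraction, so load it in embedding mode
# rather than for text generation.
llm = Llama(model_path=path, embedding=True)
print(llm.create_embedding("how do GGUF files work?")["data"][0]["embedding"][:8])
```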
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LaaP-ai/finvix1.4-1.5B | LaaP-ai | 2025-05-31T09:53:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T09:52:51Z | ---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LaaP-ai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-1.5B-Instruct
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
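The card ships no usage snippet; here is a minimal sketch, assuming this fine-tune follows the standard Qwen2.5 chat format (the prompt is illustrative):
```python
# A minimal sketch, assuming the standard Qwen2.5 chat template applies
# to this fine-tune.
from transformers import pipeline

generator = pipeline("text-generation", model="LaaP-ai/finvix1.4-1.5B", device="cuda")
messages = [{"role": "user", "content": "Explain compound interest in two sentences."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```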
|
itufilum/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_domestic_impala | itufilum | 2025-05-31T09:50:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am invisible domestic impala",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-13T13:25:23Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_domestic_impala
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am invisible domestic impala
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_domestic_impala
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="itufilum/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-invisible_domestic_impala", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
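For orientation, here is a minimal GRPO training sketch following TRL's documented API; the dataset, reward function, and hyperparameters are illustrative and are not the settings used for this checkpoint.
```python
# A minimal GRPO sketch with TRL; dataset, reward, and hyperparameters are
# illustrative, not the recipe behind this model.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 200 characters.
    return [-abs(200 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-1.5b-grpo", per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()
```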
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
viazzana/vit-fruits-classifier | viazzana | 2025-05-31T09:48:36Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-30T11:56:36Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-fruits-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Custom fruit image dataset (uploaded from GitHub) without augmentation
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9663461538461539
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-fruits-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on a custom fruit image dataset (uploaded from GitHub) without augmentation.
It achieves the following results on the evaluation set:
- Loss: 0.1299
- Accuracy: 0.9663
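The sections below are still stubs; in the meantime, here is a minimal inference sketch, assuming the checkpoint ships its image processor and label mapping (standard for Trainer-exported ViT models):
```python
# A minimal inference sketch; "fruit.jpg" is a hypothetical input image.
from transformers import pipeline

classifier = pipeline("image-classification", model="viazzana/vit-fruits-classifier")
predictions = classifier("fruit.jpg")  # accepts a local path, URL, or PIL image
print(predictions[:3])  # top predicted classes with scores
```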
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1341 | 1.0 | 520 | 0.1599 | 0.9538 |
| 0.0929 | 2.0 | 1040 | 0.1430 | 0.9577 |
| 0.0834 | 3.0 | 1560 | 0.1416 | 0.9606 |
| 0.072 | 4.0 | 2080 | 0.1385 | 0.9596 |
| 0.0536 | 5.0 | 2600 | 0.1386 | 0.9606 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
ESERCKR/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_singing_hummingbird | ESERCKR | 2025-05-31T09:44:14Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mimic singing hummingbird",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-06T12:11:35Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_singing_hummingbird
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mimic singing hummingbird
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_singing_hummingbird
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ESERCKR/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_singing_hummingbird", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance_reasoning_distilled-500-v1 | ibrahimbukhariLingua | 2025-05-31T09:42:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T09:42:06Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: qwen2.5-7b-en-wikipedia-finance_reasoning_distilled-500-v1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-7b-en-wikipedia-finance_reasoning_distilled-500-v1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance_reasoning_distilled-500-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
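For orientation, here is a minimal SFT sketch following TRL's documented API; the dataset and settings are illustrative, not the actual recipe behind this checkpoint.
```python
# A minimal SFT sketch with TRL; dataset and settings are illustrative.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",
    args=SFTConfig(output_dir="qwen2.5-7b-sft"),
    train_dataset=dataset,
)
trainer.train()
```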
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
01-Sophie-Rain-Spiderman-Viral-Vide/Sophie.Rain.SpiderMan.Video.Tutorial.online | 01-Sophie-Rain-Spiderman-Viral-Vide | 2025-05-31T09:42:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T09:41:51Z | Sophie Rain Spiderman Video Tutorial Original Video |
kmpartner/bkv2tpcmlr2-test | kmpartner | 2025-05-31T09:42:08Z | 9 | 0 | peft | [
"peft",
"tensorboard",
"diffusers",
"safetensors",
"arxiv:1910.09700",
"base_model:nota-ai/bk-sdm-v2-tiny",
"base_model:adapter:nota-ai/bk-sdm-v2-tiny",
"region:us"
] | null | 2025-04-08T12:30:33Z | ---
base_model: nota-ai/bk-sdm-v2-tiny
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
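Since the card gives no code, here is a heavily hedged sketch; it assumes this PEFT adapter targets the UNet of the `nota-ai/bk-sdm-v2-tiny` Stable Diffusion distillate, which the card does not state.
```python
# A hedged sketch: the assumption that the adapter applies to the UNet is an
# editorial guess, not the card's. Adjust if it targets a different component.
import torch
from diffusers import StableDiffusionPipeline
from peft import PeftModel

pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-v2-tiny", torch_dtype=torch.float16
).to("cuda")
pipe.unet = PeftModel.from_pretrained(pipe.unet, "kmpartner/bkv2tpcmlr2-test")
image = pipe("a photo of a corgi").images[0]  # illustrative prompt
image.save("corgi.png")
```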
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
elliotthwangmsa/KimLan-Mistral0.2-7b-tw | elliotthwangmsa | 2025-05-31T09:41:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T09:30:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
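Since the card gives no code, here is a minimal sketch, assuming standard causal-LM loading and the model's chat template (the prompt is illustrative):
```python
# A minimal sketch, assuming standard transformers causal-LM usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elliotthwangmsa/KimLan-Mistral0.2-7b-tw"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce Taipei in Traditional Chinese."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```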
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ecamli/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-vocal_placid_sloth | ecamli | 2025-05-31T09:39:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am vocal placid sloth",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit",
"base_model:finetune:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T14:02:19Z | ---
base_model: Gensyn/Qwen2.5-72B-Instruct-bnb-4bit
library_name: transformers
model_name: Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-vocal_placid_sloth
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am vocal placid sloth
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-vocal_placid_sloth
This model is a fine-tuned version of [Gensyn/Qwen2.5-72B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-72B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ecamli/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-vocal_placid_sloth", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/erkancamli-inividual/huggingface/runs/dkj5vtil)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Azur-abcd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar | Azur-abcd | 2025-05-31T09:38:34Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am aquatic mute jaguar",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-09T06:46:09Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am aquatic mute jaguar
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Azur-abcd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_mute_jaguar", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
wooki1/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-thriving_meek_jay | wooki1 | 2025-05-31T09:38:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am thriving meek jay",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T13:56:50Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-thriving_meek_jay
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am thriving meek jay
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-thriving_meek_jay
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wooki1/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-thriving_meek_jay", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
PeanutCoding/Donuttest | PeanutCoding | 2025-05-31T09:37:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-30T14:56:47Z | ---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: Donuttest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Donuttest
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
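Here is a minimal inference sketch for a Donut-style checkpoint; the task prompt token below is a placeholder, since the card does not say which task this fine-tune targets.
```python
# A hedged sketch: the task prompt is a placeholder, and "document.png" is a
# hypothetical input image.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("PeanutCoding/Donuttest")
model = VisionEncoderDecoderModel.from_pretrained("PeanutCoding/Donuttest")

image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s>"  # replace with the fine-tune's actual start token
decoder_input_ids = processor.tokenizer(task_prompt, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```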
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Snarcy/mit-b0_train_002 | Snarcy | 2025-05-31T09:36:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T06:56:24Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
model-index:
- name: mit-b0_train_002
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mit-b0_train_002
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0192
- Mean Iou: 0.7729
- Mean Accuracy: 0.9129
- Overall Accuracy: 0.9927
- Per Category Iou: [0.9925853731931514, 0.5531662945026051]
- Per Category Accuracy: [0.9944381646337016, 0.8312701706134288]
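Here is a minimal inference sketch, assuming the checkpoint includes its image processor config (standard for Trainer-exported SegFormer models):
```python
# A minimal sketch; "sample.jpg" is a hypothetical input image.
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="Snarcy/mit-b0_train_002")
for result in segmenter("sample.jpg"):
    print(result["label"], result["score"])
```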
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:----------------------------------------:|
| 0.0683 | 2.0833 | 200 | 0.0713 | 0.7232 | 0.7746 | 0.9927 | [0.9926882401569727, 0.4536656085583307] | [0.9976131421893637, 0.5515459799195718] |
| 0.0255 | 4.1667 | 400 | 0.0336 | 0.7745 | 0.8378 | 0.9941 | [0.994005549918548, 0.5549593605478301] | [0.9975454321905387, 0.678090930880619] |
| 0.0149 | 6.25 | 600 | 0.0259 | 0.7845 | 0.8906 | 0.9937 | [0.9936048422301071, 0.5753926998804247] | [0.9959657107780793, 0.7852212502935558] |
| 0.0128 | 8.3333 | 800 | 0.0213 | 0.7722 | 0.8384 | 0.9939 | [0.9938867024100838, 0.550550200929298] | [0.9974128512988522, 0.6793014703212046] |
| 0.0109 | 10.4167 | 1000 | 0.0198 | 0.7991 | 0.9127 | 0.9941 | [0.9940010588112351, 0.6042604297464693] | [0.9958754486356582, 0.829546362450035] |
| 0.0082 | 12.5 | 1200 | 0.0191 | 0.7862 | 0.8980 | 0.9936 | [0.9935740957964988, 0.5788638505217656] | [0.995770615987161, 0.8001665702270245] |
| 0.0096 | 14.5833 | 1400 | 0.0190 | 0.7804 | 0.9014 | 0.9933 | [0.9932077882986933, 0.5675541382631513] | [0.9953228943335383, 0.8075024392369727] |
| 0.0089 | 16.6667 | 1600 | 0.0194 | 0.7782 | 0.9219 | 0.9928 | [0.9927389453405545, 0.5637159368970022] | [0.9943921764738984, 0.8494669994843101] |
| 0.0062 | 18.75 | 1800 | 0.0192 | 0.7729 | 0.9129 | 0.9927 | [0.9925853731931514, 0.5531662945026051] | [0.9944381646337016, 0.8312701706134288] |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/anime-senko-chat-enhanced-GGUF | mradermacher | 2025-05-31T09:35:34Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:EnterNameBros/anime-senko-chat-enhanced",
"base_model:quantized:EnterNameBros/anime-senko-chat-enhanced",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T09:26:39Z | ---
base_model: EnterNameBros/anime-senko-chat-enhanced
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/EnterNameBros/anime-senko-chat-enhanced
<!-- provided-files -->
Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
garliceric/test2 | garliceric | 2025-05-31T09:35:04Z | 0 | 0 | null | [
"pytorch",
"bert",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-05-31T09:34:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: tiny-bert-sst2-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8405963302752294
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-sst2-distilled
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4421
- Accuracy: 0.8406
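Here is a minimal inference sketch for the distilled SST-2 classifier (the example sentence is illustrative):
```python
# A minimal sketch; the checkpoint is a binary sentiment classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="garliceric/test2")
print(classifier("This movie was surprisingly good!"))
```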
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007353633116058296
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4558 | 1.0 | 66 | 1.3711 | 0.8234 |
| 0.626 | 2.0 | 132 | 1.2958 | 0.8326 |
| 0.454 | 3.0 | 198 | 1.2961 | 0.8372 |
| 0.3567 | 4.0 | 264 | 1.4400 | 0.8394 |
| 0.3041 | 5.0 | 330 | 1.4421 | 0.8406 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
VortexHunter23/LeoPARD-Coder-0.1 | VortexHunter23 | 2025-05-31T09:34:46Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:agentica-org/DeepCoder-14B-Preview",
"base_model:quantized:agentica-org/DeepCoder-14B-Preview",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-31T09:32:54Z | ---
base_model: agentica-org/DeepCoder-14B-Preview
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** VortexHunter23
- **License:** apache-2.0
- **Finetuned from model:** agentica-org/DeepCoder-14B-Preview
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee | fakeid | 2025-05-31T09:34:44Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am enormous rough chimpanzee",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-16T16:02:05Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am enormous rough chimpanzee
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0+cpu
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
009-Sophie-Rain-SpiderMan-Videosss/Sophie.Rain.SpiderMan.Video.Tutorial.online | 009-Sophie-Rain-SpiderMan-Videosss | 2025-05-31T09:33:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T09:32:58Z | Sophie Rain Spiderman Video Tutorial Original Video |
MaxPowerUnlimited/vit-superhero-villain | MaxPowerUnlimited | 2025-05-31T09:31:37Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-31T07:07:59Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-superhero-villain
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.736318407960199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-superhero-villain
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2902
- Accuracy: 0.7363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 26 | 1.4140 | 0.735 |
| 1.2713 | 2.0 | 52 | 1.3908 | 0.735 |
| 1.2713 | 3.0 | 78 | 1.3709 | 0.735 |
| 1.2028 | 4.0 | 104 | 1.3544 | 0.74 |
| 1.2028 | 5.0 | 130 | 1.3359 | 0.74 |
| 1.1776 | 6.0 | 156 | 1.3219 | 0.74 |
| 1.1776 | 7.0 | 182 | 1.3078 | 0.74 |
| 1.1515 | 8.0 | 208 | 1.2952 | 0.74 |
| 1.1515 | 9.0 | 234 | 1.2841 | 0.74 |
| 1.1519 | 10.0 | 260 | 1.2733 | 0.745 |
| 1.1519 | 11.0 | 286 | 1.2637 | 0.745 |
| 1.107 | 12.0 | 312 | 1.2557 | 0.745 |
| 1.107 | 13.0 | 338 | 1.2495 | 0.745 |
| 1.0611 | 14.0 | 364 | 1.2441 | 0.745 |
| 1.0611 | 15.0 | 390 | 1.2388 | 0.745 |
| 1.0748 | 16.0 | 416 | 1.2347 | 0.745 |
| 1.0748 | 17.0 | 442 | 1.2317 | 0.745 |
| 1.0563 | 18.0 | 468 | 1.2294 | 0.745 |
| 1.0563 | 19.0 | 494 | 1.2280 | 0.745 |
| 1.062 | 20.0 | 520 | 1.2277 | 0.745 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF | mradermacher | 2025-05-31T09:28:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"en",
"dataset:smolagents/codeagent-traces",
"base_model:akseljoonas/Agentic-Qwen3-4B-e12-lr4-b2",
"base_model:quantized:akseljoonas/Agentic-Qwen3-4B-e12-lr4-b2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T08:54:50Z | ---
base_model: akseljoonas/Agentic-Qwen3-4B-e12-lr4-b2
datasets: smolagents/codeagent-traces
language:
- en
library_name: transformers
model_name: Agentic-Qwen3-4B-e12-lr4-b2
quantized_by: mradermacher
tags:
- generated_from_trainer
- open-r1
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/akseljoonas/Agentic-Qwen3-4B-e12-lr4-b2
<!-- provided-files -->
Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
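As a concrete starting point, here is a minimal chat sketch with `llama-cpp-python` (which can fetch quants from the Hub when `huggingface_hub` is installed); the file name comes from the table below, and Qwen3 support in your llama.cpp build is an assumption.
```python
# A minimal sketch, assuming a llama.cpp build with Qwen3 support.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF",
    filename="Agentic-Qwen3-4B-e12-lr4-b2.Q4_K_M.gguf",  # Q4_K_M: fast, recommended
)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python one-liner that reverses a list."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```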
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Agentic-Qwen3-4B-e12-lr4-b2-GGUF/resolve/main/Agentic-Qwen3-4B-e12-lr4-b2.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
elliotthwangmsa/KimLan-Mistral0.2-7b-tw_train_ouputs | elliotthwangmsa | 2025-05-31T09:26:02Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:elliotthwang/Ministral-7B-Instruct-v0.2-tw",
"base_model:adapter:elliotthwang/Ministral-7B-Instruct-v0.2-tw",
"region:us"
] | null | 2025-05-31T09:25:58Z | ---
base_model: elliotthwang/Ministral-7B-Instruct-v0.2-tw
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
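Until the authors fill this section in, the adapter can presumably be attached to the base model named in the front matter (a minimal sketch, assuming a causal-LM PEFT adapter; both repo ids are taken from this card's metadata):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "elliotthwang/Ministral-7B-Instruct-v0.2-tw"
adapter_id = "elliotthwangmsa/KimLan-Mistral0.2-7b-tw_train_ouputs"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights
```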
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
uiovasot/salami_model_v1_vllm | uiovasot | 2025-05-31T09:23:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T09:20:42Z | ---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** uiovasot
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
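For a quick sanity check, the uploaded weights can be queried with a standard text-generation pipeline (a minimal sketch; the prompt and generation settings are illustrative, and the `_vllm` suffix suggests the repo is also intended for vLLM serving):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="uiovasot/salami_model_v1_vllm")
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```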
|
Nitish035/mistral_CMoS_adapter8 | Nitish035 | 2025-05-31T09:22:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T09:22:29Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Nitish035
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
009-Sophie-Rain-SpiderMan-Videosss/watch-Sophie.Rain.Spider-Man.Video.Tutorial | 009-Sophie-Rain-SpiderMan-Videosss | 2025-05-31T09:21:31Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T09:21:12Z | 01 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain is a social media personality and digital creator who gained fame for a viral video related to Spider-Man. The video, which is "trending" and "leaked," has caused significant buzz online, making her a popular figure in the online community. Additionally, Sophie Rain is known for her high earnings on OnlyFans, surpassing those of some NBA legends
|
pasithbas159/Gemma3_HII_satellite_v1 | pasithbas159 | 2025-05-31T09:17:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it",
"base_model:finetune:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T14:16:29Z | ---
base_model: unsloth/gemma-3-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pasithbas159
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlfoundations-dev/openthoughts3_100k_llama3 | mlfoundations-dev | 2025-05-31T09:15:30Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T17:52:29Z | ---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openthoughts3_100k_llama3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openthoughts3_100k_llama3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the mlfoundations-dev/openthoughts3_100k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 64
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
jacktol/atc-pilot-speaker-role-classification-model | jacktol | 2025-05-31T09:15:23Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"en",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-30T09:47:40Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- accuracy
- precision
- recall
base_model:
- microsoft/deberta-v3-large
model-index:
- name: ATC-Pilot-Speaker Role Classifier
results:
- task:
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 96.64
- name: Precision
type: precision
value: 96.4
- name: Recall
type: recall
value: 96.91
- name: F1 Score
type: f1
value: 96.65
---
# ATC-Pilot Speaker Role Classification Model
This is a binary sequence classification model designed to determine whether a given air traffic communication utterance originates from a **pilot** or an **air traffic controller (ATC)**, based on text alone.
Traditionally, speaker role attribution in air traffic communication relies on acoustic features such as voice characteristics and channel separation. This model departs from that convention by tackling the task entirely in the **text domain**, using a transformer-based architecture fine-tuned for speaker role prediction.
## Task Description
The model performs binary classification on single-turn utterances to assign one of two speaker roles:
- `PILOT`
- `ATC`
It is fine-tuned using a DeBERTa-v3-large model on manually processed and labeled air traffic communication transcripts.
## Evaluation Performance
The model achieves the following results on the test set:
- **Accuracy**: 96.64%
- **Precision**: 96.40%
- **Recall**: 96.91%
- **F1 Score**: 96.65%
## Preprocessing & Training Setup
A custom preprocessing pipeline was used to prepare the training data, including:
- Speaker attribution heuristics based on known call sign and phrase patterns
- Phrase normalization
- Text standardization
- Filtering of irrelevant utterances
- Dataset balancing
Each utterance is treated independently and labeled for speaker role classification.
## Model Architecture
- Base model: `microsoft/deberta-v3-large`
- Task type: `SequenceClassification` (`num_labels=2`)
- Training setup:
- Trained on 2x H100 80GB SXM5
- Cosine learning rate schedule with warmup (10%)
- Batch size: 128
- Early stopping based on F1 score
- Max sequence length: 256 tokens
- Mixed-precision training (FP16)
- Evaluation every 200 steps
## Intended Use
This model is designed for:
- Speaker role tagging in ATC communication transcripts
- Preprocessing for multi-modal ATC systems
- Filtering or structuring large corpora of aviation text for downstream tasks
## Limitations
- Operates on single-turn utterances only; no turn-level or dialogue context is used
- Ambiguous transmissions like "ROGER" or "THANK YOU" may be difficult to classify using text alone
- Additional modalities (e.g., audio features, metadata) may be required for full disambiguation
## Example Predictions
```
Input: "CLEARED FOR TAKEOFF RUNWAY ONE ONE LEFT"
Prediction: "ATC"
Input: "REQUESTING PUSHBACK"
Prediction: "PILOT"
```
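The same predictions can be reproduced with the standard text-classification pipeline (a minimal sketch; the exact label strings returned depend on the id2label mapping stored in the model config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jacktol/atc-pilot-speaker-role-classification-model",
)
print(classifier("CLEARED FOR TAKEOFF RUNWAY ONE ONE LEFT"))  # expected role: ATC
print(classifier("REQUESTING PUSHBACK"))                      # expected role: PILOT
```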
## Benchmark Comparison
This model improves upon prior transformer-based models for text-only speaker role classification. For comparison, a related model by [Juan Zuluaga-Gomez](https://huggingface.co/Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc), based on BERT-base, achieved the following:
- **Accuracy**: 89.03%
- **Precision**: 87.10%
- **Recall**: 91.63%
- **F1 Score**: 89.31%
The fine-tuned DeBERTa-v3-large model presented here significantly outperforms this baseline:
- **Accuracy**: 96.64%
- **Precision**: 96.40%
- **Recall**: 96.91%
- **F1 Score**: 96.65%
Jupyter notebooks are included to reproduce and compare evaluations:
- `evaluate_juans_model.ipynb`
- `evaluate_jacks_model.ipynb`
These evaluate both models using the same test set and print detailed classification metrics.
## References
- [Juan Zuluaga-Gomez – Hugging Face Model](https://huggingface.co/Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc)
- [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://github.com/microsoft/DeBERTa)
- [GitHub Repository – ATC Pilot Speaker Role Classification Task](https://github.com/jack-tol/atc-pilot-speaker-role-classification-task) |
zahramahani/Qwen2-0.5B-GRPO-test2 | zahramahani | 2025-05-31T09:14:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:37:27Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test2
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test2
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zahramahani/Qwen2-0.5B-GRPO-test2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
keanteng/bert-climate-sentiment-wqf7007 | keanteng | 2025-05-31T09:14:23Z | 0 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"climate-change",
"twitter",
"en",
"dataset:edqian/twitter-climate-change-sentiment-dataset",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:agpl-3.0",
"region:us"
] | text-classification | 2025-05-29T03:42:56Z |
---
language: en
license: agpl-3.0
datasets:
- edqian/twitter-climate-change-sentiment-dataset
metrics:
- accuracy
- f1
- precision
- recall
base_model: bert-base-uncased
pipeline_tag: text-classification
tags:
- text-classification
- sentiment-analysis
- climate-change
- twitter
- bert
---
# BERT Climate Sentiment Analysis Model
## Model Description
This model fine-tunes BERT (bert-base-uncased) to perform sentiment analysis on climate change-related tweets. It classifies tweets into four sentiment categories: anti-climate (negative), neutral, pro-climate (positive), and news.
## Model Details
- **Model Type:** Fine-tuned BERT (bert-base-uncased)
- **Version:** 1.0.0
- **Framework:** PyTorch & Transformers
- **Language:** English
- **License:** [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.en.html)
## Training Data
This model was trained on the [Twitter Climate Change Sentiment Dataset](https://www.kaggle.com/datasets/edqian/twitter-climate-change-sentiment-dataset/data), which contains tweets related to climate change labeled with sentiment categories:
- **news**: Factual news about climate change (2)
- **pro**: Supporting action on climate change (1)
- **neutral**: Neutral stance on climate change (0)
- **anti**: Skeptical about climate change claims (-1)
The dataset was used with raw text without special preprocessing to evaluate performance on natural language tweets.
## Training Procedure
- **Training Framework:** PyTorch with Transformers
- **Training Approach:** Fine-tuning the entire BERT model
- **Optimizer:** AdamW with learning rate 2e-5
- **Batch Size:** 64
- **Early Stopping:** Yes, with patience of 2 epochs
- **Hardware:** GPU acceleration (when available)
## Model Performance

## Limitations and Biases
- The model is trained on Twitter data, which may not generalize well to other text sources.
- Twitter data may contain inherent biases in how climate change is discussed.
- The model might struggle with complex or nuanced sentiment expressions.
- Sarcasm and figurative language may be misclassified.
- The model is only trained for English language content.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("keanteng/bert-climate-sentiment-wqf7007")
model = AutoModelForSequenceClassification.from_pretrained("keanteng/bert-climate-sentiment-wqf7007")
# Prepare text
text = "Climate change is real and we need to act now!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
# Make prediction
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.argmax(outputs.logits, dim=1)
# Map the predicted class index to a sentiment label. NOTE: the model outputs
# indices 0-3; this mapping assumes the raw labels (-1..2) were shifted to 0..3
# during training -- verify against model.config.id2label
sentiment_map = {0: "anti", 1: "neutral", 2: "pro", 3: "news"}
predicted_sentiment = sentiment_map[predictions.item()]
print(f"Predicted sentiment: {predicted_sentiment}")
```
## Ethical Considerations
This model should be used responsibly for analyzing climate sentiment and should not be deployed in ways that might:
- Amplify misinformation about climate change
- Target or discriminate against specific groups
- Make critical decisions without human oversight
|
Mhammad2023/my-dummy-model | Mhammad2023 | 2025-05-31T09:11:44Z | 0 | 0 | null | [
"tf",
"camembert",
"region:us"
] | null | 2025-05-30T18:52:37Z | # My Dummy Model
---
language: fr
license: apache-2.0
tags:
- masked-lm
- camembert
- transformers
- tf
- french
- fill-mask
---
# CamemBERT MLM - Fine-tuned Model
This is a TensorFlow masked language model (MLM) based on the [camembert-base](https://huggingface.co/camembert-base) checkpoint, a RoBERTa-like model trained on French text.
## Model description
This model uses the CamemBERT architecture, which is a RoBERTa-based transformer trained on large-scale French corpora (e.g., OSCAR, CCNet). It's designed to perform Masked Language Modeling (MLM) tasks.
It was loaded and saved using the `transformers` library in TensorFlow (`TFAutoModelForMaskedLM`). It can be used for fill-in-the-blank tasks in French.
## Intended uses & limitations
### Intended uses
- Fill-mask predictions in French
- Feature extraction for NLP tasks
- Fine-tuning on downstream tasks like text classification, NER, etc.
### Limitations
- Works best with French text
- May not generalize well to other languages
- Cannot be used for generative tasks (e.g., translation, text generation)
## How to use
```python
from transformers import TFAutoModelForMaskedLM, AutoTokenizer
import tensorflow as tf
model = TFAutoModelForMaskedLM.from_pretrained("Mhammad2023/my-dummy-model")
tokenizer = AutoTokenizer.from_pretrained("Mhammad2023/my-dummy-model")
inputs = tokenizer(f"J'aime le {tokenizer.mask_token} rouge.", return_tensors="tf")  # CamemBERT's mask token is <mask>, not [MASK]
outputs = model(**inputs)
logits = outputs.logits
masked_index = tf.argmax(tf.cast(inputs.input_ids == tokenizer.mask_token_id, tf.int32), axis=1)[0]
predicted_token_id = tf.argmax(logits[0, masked_index])
predicted_token = tokenizer.decode([int(predicted_token_id)])
print(f"Predicted word: {predicted_token}")
```
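Alternatively, the fill-mask pipeline resolves the mask token and decoding automatically (a minimal sketch; `framework="tf"` matches the TensorFlow weights in this repo):
```python
from transformers import pipeline

# CamemBERT uses <mask> (not [MASK]) as its mask token
unmasker = pipeline("fill-mask", model="Mhammad2023/my-dummy-model", framework="tf")
print(unmasker("J'aime le <mask> rouge."))
```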
## Limitations and bias
This model inherits the limitations and biases from the camembert-base checkpoint, including:
- Potential biases from the training data (e.g., internet corpora)
- Inappropriate predictions for sensitive topics
Use with caution in production or sensitive applications.
## Training data
The model was not further fine-tuned; it is based directly on camembert-base, which was trained on:
- OSCAR (Open Super-large Crawled ALMAnaCH coRpus)
- CCNet (Common Crawl News)
## Training procedure
No additional training was applied for this version. You can load and fine-tune it on your task using Trainer or Keras API.
## Evaluation results
This version has not been evaluated on downstream tasks. For evaluation metrics and benchmarks, refer to the original camembert-base model card. |
BootesVoid/cmbbsi7eo09yj85uuz13e3pds_cmbbybizd0b0k85uumpwrigrg | BootesVoid | 2025-05-31T09:10:46Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T09:10:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mira_blaze
---
# Cmbbsi7Eo09Yj85Uuz13E3Pds_Cmbbybizd0B0K85Uumpwrigrg
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mira_blaze` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "mira_blaze",
"lora_weights": "https://huggingface.co/BootesVoid/cmbbsi7eo09yj85uuz13e3pds_cmbbybizd0b0k85uumpwrigrg/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbbsi7eo09yj85uuz13e3pds_cmbbybizd0b0k85uumpwrigrg', weight_name='lora.safetensors')
image = pipeline('mira_blaze').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbbsi7eo09yj85uuz13e3pds_cmbbybizd0b0k85uumpwrigrg/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/SynthRL-A-MMK12-8K-7B-GGUF | mradermacher | 2025-05-31T09:10:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Jakumetsu/SynthRL-A-MMK12-8K-7B",
"base_model:quantized:Jakumetsu/SynthRL-A-MMK12-8K-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T08:40:59Z | ---
base_model: Jakumetsu/SynthRL-A-MMK12-8K-7B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Jakumetsu/SynthRL-A-MMK12-8K-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
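As a concrete example of the concatenation step, a quant published in multiple parts would be joined into one file before loading (a minimal sketch; the part file names here are hypothetical and only illustrate the usual naming pattern):
```
# hypothetical part names -- substitute the actual files listed in the repo
cat SynthRL-A-MMK12-8K-7B.Q8_0.gguf.part1of2 \
    SynthRL-A-MMK12-8K-7B.Q8_0.gguf.part2of2 > SynthRL-A-MMK12-8K-7B.Q8_0.gguf
```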
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SynthRL-A-MMK12-8K-7B-GGUF/resolve/main/SynthRL-A-MMK12-8K-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF | mradermacher | 2025-05-31T09:10:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:adaptive-ai/Qwen3-4B-Reminder-2025-05-29",
"base_model:quantized:adaptive-ai/Qwen3-4B-Reminder-2025-05-29",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T08:40:10Z | ---
base_model: adaptive-ai/Qwen3-4B-Reminder-2025-05-29
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/adaptive-ai/Qwen3-4B-Reminder-2025-05-29
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Reminder-2025-05-29-GGUF/resolve/main/Qwen3-4B-Reminder-2025-05-29.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Muennighoff/Qwen2.5-1.5B-hl-true-v3 | Muennighoff | 2025-05-31T09:10:17Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:simplescaling/openaimath",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-27T04:43:25Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: simplescaling/openaimath
library_name: transformers
model_name: Qwen2.5-1.5B-hl-true-v3
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-hl-true-v3
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [simplescaling/openaimath](https://huggingface.co/datasets/simplescaling/openaimath) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Muennighoff/Qwen2.5-1.5B-hl-true-v3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muennighoff/halos/runs/8k0io02d)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tiao55/task-10-microsoft-Phi-3.5-mini-instruct | tiao55 | 2025-05-31T09:09:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-05-31T09:09:23Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
BootesVoid/cmbby9s6i0b0a85uuxvqsrxlg_cmbbychps0b1085uua9zc66un | BootesVoid | 2025-05-31T09:08:03Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T09:07:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MEGS
---
# Cmbby9S6I0B0A85Uuxvqsrxlg_Cmbbychps0B1085Uua9Zc66Un
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MEGS` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MEGS",
"lora_weights": "https://huggingface.co/BootesVoid/cmbby9s6i0b0a85uuxvqsrxlg_cmbbychps0b1085uua9zc66un/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbby9s6i0b0a85uuxvqsrxlg_cmbbychps0b1085uua9zc66un', weight_name='lora.safetensors')
image = pipeline('MEGS').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbby9s6i0b0a85uuxvqsrxlg_cmbbychps0b1085uua9zc66un/discussions) to add images that show off what you’ve made with this LoRA.
|
FormlessAI/d306e4d9-46e2-4648-aaa0-ee2df6b445f1 | FormlessAI | 2025-05-31T09:08:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/Qwen2-0.5B",
"base_model:finetune:unsloth/Qwen2-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T06:10:37Z | ---
base_model: unsloth/Qwen2-0.5B
library_name: transformers
model_name: d306e4d9-46e2-4648-aaa0-ee2df6b445f1
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for d306e4d9-46e2-4648-aaa0-ee2df6b445f1
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/d306e4d9-46e2-4648-aaa0-ee2df6b445f1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/pm9w02qh)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
VortexHunter23/Shed-Coder-0.6 | VortexHunter23 | 2025-05-31T09:06:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:VortexHunter23/Shed-Coder-0.5",
"base_model:quantized:VortexHunter23/Shed-Coder-0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-31T09:04:30Z | ---
base_model: VortexHunter23/Shed-Coder-0.5
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** VortexHunter23
- **License:** apache-2.0
- **Finetuned from model :** VortexHunter23/Shed-Coder-0.5
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luckeciano/Qwen-2.5-7B-GRPO-Base-1Action_382 | luckeciano | 2025-05-31T09:04:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T04:18:14Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-1Action_382
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-1Action_382
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-1Action_382", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/497jcy9a)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jyevo/gemma-3 | jyevo | 2025-05-31T09:04:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:21:13Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rustam39/xlm-roberta-base-finetuned-panx-de | Rustam39 | 2025-05-31T09:01:09Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-29T09:59:10Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1399
- F1: 0.8620
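As a quick illustration (not part of the original card), here is a minimal inference sketch; it assumes this checkpoint is available on the Hub under the repo id above and carries PAN-X style NER labels:
```python
from transformers import pipeline

# Minimal sketch: German NER with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="Rustam39/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```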
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2556 | 1.0 | 525 | 0.1498 | 0.8286 |
| 0.1305 | 2.0 | 1050 | 0.1374 | 0.8535 |
| 0.0786 | 3.0 | 1575 | 0.1399 | 0.8620 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
zhouwg/kantv | zhouwg | 2025-05-31T09:00:50Z | 4 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T03:21:31Z | 1.I'm not AI expert and I'm learning on-device AI tech currently. all LLM models in this repo are created by official tools in llama.cpp(https://github.com/ggml-org/llama.cpp/blob/master/convert_hf_to_gguf.py, https://github.com/ggml-org/llama.cpp/tree/master/tools/quantize). No fine-tuning or other specialized AI techniques were applied because I don't know how to use them at the moment.
2.The LLM models in this repo are dedicated to and validated for Project KanTV (https://github.com/kantv-ai/kantv), in other words, for personal dev experiments: Google Gemma3-4B achieves the best overall experience on Qualcomm Snapdragon 8 Gen 3 and 8 Elite based Android phones.
3.GGUF-format LLM models created by AI experts from ggml-org (https://huggingface.co/ggml-org), lmstudio-community (https://huggingface.co/lmstudio-community), and unsloth (https://huggingface.co/unsloth) are strongly recommended.
4.About DeepSeek-R1-0528-Qwen3-8B-q4_k_m.gguf
- Model creator: [deepseek-ai](https://huggingface.co/deepseek-ai)
- Original model: [DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)
- steps to build DeepSeek-R1-0528-Qwen3-8B-q4_k_m.gguf
```
export HF_ENDPOINT=https://hf-mirror.com  # optional; may be needed for developers in China
huggingface-cli download --resume-download deepseek-ai/DeepSeek-R1-0528-Qwen3-8B --local-dir DeepSeek-R1-0528-Qwen3-8B
python convert_hf_to_gguf.py DeepSeek-R1-0528-Qwen3-8B
llama-quantize DeepSeek-R1-0528-Qwen3-8B-F16.gguf DeepSeek-R1-0528-Qwen3-8B-q4_k_m.gguf Q4_K_M
```
5.About MiMo-VL-7B-RL-q4_k_m.gguf
- Model creator: [XiaomiMiMo](https://huggingface.co/XiaomiMiMo/)
- Original model: [MiMo-VL-7B-RL](https://huggingface.co/XiaomiMiMo/MiMo-VL-7B-RL)
- steps to build MiMo-VL-7B-RL-q4_k_m.gguf (similar to DeepSeek-R1-0528-Qwen3-8B-q4_k_m.gguf); a sketch of the analogous commands is shown below
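A minimal sketch of those analogous commands, mirroring the DeepSeek steps above (it assumes the same llama.cpp workflow and output file naming apply to this model):
```
export HF_ENDPOINT=https://hf-mirror.com  # optional; may be needed for developers in China
huggingface-cli download --resume-download XiaomiMiMo/MiMo-VL-7B-RL --local-dir MiMo-VL-7B-RL
python convert_hf_to_gguf.py MiMo-VL-7B-RL
llama-quantize MiMo-VL-7B-RL-F16.gguf MiMo-VL-7B-RL-q4_k_m.gguf Q4_K_M
```
|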
elliotthwang/Ministral-7B-Instruct-v0.2-tw | elliotthwang | 2025-05-31T09:00:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T02:28:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned on Traditional Chinese (繁體中文) data.
loss: 0.1670
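A minimal quick-start sketch (not from the original card; it assumes the tokenizer ships a chat template, as Mistral-instruct derivatives usually do):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elliotthwang/Ministral-7B-Instruct-v0.2-tw"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt in Traditional Chinese, the fine-tuning language.
messages = [{"role": "user", "content": "請用繁體中文簡單自我介紹。"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```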
|
Asit03/LB-30-05-25 | Asit03 | 2025-05-31T08:59:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:Asit03/LB-14-05-25",
"base_model:quantized:Asit03/LB-14-05-25",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:44:41Z | ---
base_model: Asit03/LB-14-05-25
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Asit03
- **License:** apache-2.0
- **Finetuned from model:** Asit03/LB-14-05-25
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
safe-llm-finetune/gemma-3-1B-it-CodeUltraFeedback-r8_a16_d0.05_b4_nf4 | safe-llm-finetune | 2025-05-31T08:58:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T16:04:26Z | ---
base_model: google/gemma-3-1B-it
library_name: transformers
model_name: gemma-3-1B-it-CodeUltraFeedback-r8_a16_d0.05_b4_nf4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-1B-it-CodeUltraFeedback-r8_a16_d0.05_b4_nf4
This model is a fine-tuned version of [google/gemma-3-1B-it](https://huggingface.co/google/gemma-3-1B-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="safe-llm-finetune/gemma-3-1B-it-CodeUltraFeedback-r8_a16_d0.05_b4_nf4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/manon_k-saarland-informatics-campus/huggingface/runs/8mh6uqsc)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
trending-alana-flores-foto-video-link/viralhq.enlace.completo.18.alana.flores.foto.video.de.alana.flores.foto.viral.video | trending-alana-flores-foto-video-link | 2025-05-31T08:57:51Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T08:57:00Z | <a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/">🌐 Viral Video Original Full HD🟢==►► WATCH NOW</a>
<a rel="nofollow" href="https://viralflix.xyz/?or">🔴 CLICK HERE 🌐==►► Download Now)</a> |
jinjiajie/LongRefiner-Global-Selection-3B | jinjiajie | 2025-05-31T08:57:19Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-05-31T08:48:50Z | Temporary Redirect. Redirecting to /jinjiajie/Global-Selection-Qwen2.5-3B-Instruct/resolve/main/README.md |
ezzaldeen/Qwen2.5-1.5B-Open-R1-Distill | ezzaldeen | 2025-05-31T08:56:46Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T17:22:22Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ezzaldeen/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/smol-t1/huggingface/runs/zwhor76i)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MaLA-LM/emma-500-llama3-8b-mono | MaLA-LM | 2025-05-31T08:55:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:MaLA-LM/mala-monolingual-split",
"dataset:MaLA-LM/mala-code-reasoning-v2",
"arxiv:2409.17892",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T07:37:45Z |
---
license: llama3
datasets:
- MaLA-LM/mala-monolingual-split
- MaLA-LM/mala-code-reasoning-v2
base_model:
- meta-llama/Llama-3-8B
library_name: transformers
---
# Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
## Model Description
**EMMA-500 Llama 3 8B** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 3 8B** architecture. Leveraging the **[MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)**, which spans over 500 languages and is augmented with books, code, instruction data, and papers, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, and text classification.
---
### Model Details
- **Architecture**: Built on Llama 3 8B with enhanced language adaptation through continual pre-training.
- **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each).
- **Data Mix**: A diverse [monolingual mix](https://mala-lm.github.io/static/images/mix-monolingual.png) of text from domains like code, books, instruction data, and papers.
- **Total Tokens**: 419B
---
### Data Access
🤗[MaLA Corpus Dataset Collection](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)
- MaLA monolingual corpus: 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split)
- MaLA code and reasoning corpus: 🤗[MaLA-LM/mala-code-reasoning-v2](https://huggingface.co/datasets/MaLA-LM/mala-code-reasoning-v2)
---
### Usage
You can use **EMMA-500** for multilingual text generation. Below is an example to generate text using the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MaLA-LM/emma-500-llama3-8b-mono"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Use Cases
- Massively multilingual NLP tasks, e.g., machine translation
- Note: performance may regress on some tasks and in high-resource languages
- Not suitable for real-world deployment, especially in high-stakes domains
---
## Citation
If you find this model useful, please cite the paper below.
```
```
See the [paper](https://arxiv.org/abs/2409.17892) below for the preceding EMMA-500 model trained on Llama 2 (🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b)).
```
@article{ji2024emma500enhancingmassivelymultilingual,
title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
year={2024},
journal={arXiv preprint 2409.17892},
url={https://arxiv.org/abs/2409.17892},
}
```
|
green19d25y/Qwen2-36m-hf | green19d25y | 2025-05-31T08:55:08Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"text-generation",
"en",
"dataset:wikimedia/wikipedia",
"license:mit",
"region:us"
] | text-generation | 2025-05-31T08:09:24Z | ---
license: mit
language:
- en
pipeline_tag: text-generation
datasets:
- wikimedia/wikipedia
---
# Qwen2 HF model (36M Parameters)
This is a **Qwen2 architecture model** trained **completely from scratch** with **36 million parameters**. It uses a custom tokenizer and vocabulary, and is designed for experimentation with compact, task-specific language models.
## Training Details
- **Architecture**: Qwen2
- **Parameters**: 36M
- **Training from scratch**: Yes
- **Pretrained base**: None
- **Tokenizer**: ByteLevelBPETokenizer
- **Language**: English
- **Dataset**: [Wikipedia-20231101.en](https://huggingface.co/datasets/wikimedia/wikipedia)
- **Max position embeddings**: 512
- **Learning rate**: 4e-4
- **Number of steps**: 500
- **Train/validation split ratio**: 70/30
- **Hidden size**: 384
- **Number of attention heads**: 12
- **Number of transformer layers**: 12
- **Dropout rate**: 0.2
- **Vocabulary size**: 10,000
- **Minimum token frequency**: 5
## Purpose
This is a quick experiment to see how well Qwen2 handles a small amount of data. It seems to be working reasonably well so far. Right now, it's only trained on 500 rows from the [Wikipedia-20231101.en](https://huggingface.co/datasets/wikimedia/wikipedia) dataset, and just 500 training steps have been completed — more training is still to come.
## Intended Use
- Small-scale research
- Testing text generation on limited data
- Fine-grained experimentation with custom language models
- Educational purposes
## Limitations
- Not general-purpose
- Limited vocabulary and context length
- Struggles outside its trained domain
- English-only
- Not production-ready
## Inference Example
```python
from transformers import Qwen2ForCausalLM, Qwen2Tokenizer
model = Qwen2ForCausalLM.from_pretrained("green19d25y/Qwen2-36m-hf")
tokenizer = Qwen2Tokenizer.from_pretrained("green19d25y/Qwen2-36m-hf")
prompt = "Once upon a time"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
input_ids,
max_length=100,
num_return_sequences=1,
do_sample=True,
temperature=0.7
)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
``` |
MaLA-LM/emma-500-llama3-8b-bi | MaLA-LM | 2025-05-31T08:54:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:MaLA-LM/mala-monolingual-split",
"dataset:MaLA-LM/mala-code-reasoning-v2",
"dataset:MaLA-LM/mala-bilingual-translation-corpus",
"arxiv:2409.17892",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T07:40:47Z |
---
license: llama3
datasets:
- MaLA-LM/mala-monolingual-split
- MaLA-LM/mala-code-reasoning-v2
- MaLA-LM/mala-bilingual-translation-corpus
base_model:
- meta-llama/Llama-3-8B
library_name: transformers
---
# Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
## Model Description
**EMMA-500 Llama 3 8B** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 3 8B** architecture. Leveraging the **[MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)**, which spans over 500 languages and is augmented with books, code, instruction data, and papers, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, and text classification.
- Project Website: https://mala-lm.github.io/emma-500-gen2.html
- Paper:
---
### Model Details
- **Architecture**: Built on Llama 3 8B with enhanced language adaptation through continual pre-training.
- **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each).
- **Data Mix**: A diverse [bilingual mix](https://mala-lm.github.io/static/images/mix-bilingual.png) of text from domains like code, books, instruction data, and papers.
- **Total Tokens**: 671B
---
### Data Access
🤗[MaLA Corpus Dataset Collection](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)
- MaLA monolingual corpus: 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split)
- MaLA bilingual translation corpus: 🤗[MaLA-LM/mala-bilingual-translation-corpus](https://huggingface.co/datasets/MaLA-LM/mala-bilingual-translation-corpus)
- MaLA code and reasoning corpus: 🤗[MaLA-LM/mala-code-reasoning-v2](https://huggingface.co/datasets/MaLA-LM/mala-code-reasoning-v2)
---
### Usage
You can use **EMMA-500** for multilingual text generation. Below is an example to generate text using the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MaLA-LM/emma-500-llama3-8b-bi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Use Cases
- Massively multilingual NLP tasks, e.g., machine translation
- Note: performance may regress on some tasks and in high-resource languages
- Not suitable for real-world deployment, especially in high-stakes domains
---
## Citation
If you find this model useful, please cite the paper below.
```
```
See the [paper](https://arxiv.org/abs/2409.17892) below for the preceding EMMA-500 model trained on Llama 2 (🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b)).
```
@article{ji2024emma500enhancingmassivelymultilingual,
title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
year={2024},
journal={arXiv preprint 2409.17892},
url={https://arxiv.org/abs/2409.17892},
}
```
|
Gaurav07jha/Ml-model | Gaurav07jha | 2025-05-31T08:54:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T08:54:29Z | ---
license: apache-2.0
---
|
MaLA-LM/emma-500-llama3.1-8b-bi | MaLA-LM | 2025-05-31T08:54:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:MaLA-LM/mala-monolingual-split",
"dataset:MaLA-LM/mala-code-reasoning-v2",
"dataset:MaLA-LM/mala-bilingual-translation-corpus",
"arxiv:2409.17892",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T07:43:37Z |
---
license: llama3
datasets:
- MaLA-LM/mala-monolingual-split
- MaLA-LM/mala-code-reasoning-v2
- MaLA-LM/mala-bilingual-translation-corpus
base_model:
- meta-llama/Llama-3.1-8B
library_name: transformers
---
# Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
## Model Description
**EMMA-500 Llama 3.1 8B** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 3.1 8B** architecture. Leveraging the **[MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)**, which spans over 500 languages and is augmented with books, code, instruction data, and papers, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, and text classification.
- Project Website: https://mala-lm.github.io/emma-500-gen2.html
- Paper:
---
### Model Details
- **Architecture**: Built on Llama 3.1 8B with enhanced language adaptation through continual pre-training.
- **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each).
- **Data Mix**: A diverse [bilingual mix](https://mala-lm.github.io/static/images/mix-bilingual.png) of text from domains like code, books, instruction data, and papers.
- **Total Tokens**: 671B
---
### Data Access
🤗[MaLA Corpus Dataset Collection](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)
- MaLA monolingual corpus: 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split)
- MaLA bilingual translation corpus: 🤗[MaLA-LM/mala-bilingual-translation-corpus](https://huggingface.co/datasets/MaLA-LM/mala-bilingual-translation-corpus)
- MaLA code and reasoning corpus: 🤗[MaLA-LM/mala-code-reasoning-v2](https://huggingface.co/datasets/MaLA-LM/mala-code-reasoning-v2)
---
### Usage
You can use **EMMA-500** for multilingual text generation. Below is an example to generate text using the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MaLA-LM/emma-500-llama3.1-8b-bi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Use Cases
- Massively multilingual NLP tasks, e.g., machine translation
- Note: performance may regress on some tasks and in high-resource languages
- Not suitable for real-world deployment, especially in high-stakes domains
---
## Citation
If you find this model useful, please cite the paper below.
```
```
See the [paper](https://arxiv.org/abs/2409.17892) below for the preceding EMMA-500 model trained on Llama 2 (🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b)).
```
@article{ji2024emma500enhancingmassivelymultilingual,
title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
year={2024},
journal={arXiv preprint 2409.17892},
url={https://arxiv.org/abs/2409.17892},
}
```
|
garliceric/testtest | garliceric | 2025-05-31T08:52:27Z | 0 | 0 | null | [
"pytorch",
"safetensors",
"bert",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-05-30T16:53:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: tiny-bert-sst2-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8405963302752294
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-sst2-distilled
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4421
- Accuracy: 0.8406
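A minimal inference sketch (not in the original card; the repo id is taken from this page):
```python
from transformers import pipeline

# Minimal sketch: SST-2 style binary sentiment classification with the distilled model.
clf = pipeline("text-classification", model="garliceric/testtest")
print(clf("This movie was surprisingly good!"))
```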
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007353633116058296
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4558 | 1.0 | 66 | 1.3711 | 0.8234 |
| 0.626 | 2.0 | 132 | 1.2958 | 0.8326 |
| 0.454 | 3.0 | 198 | 1.2961 | 0.8372 |
| 0.3567 | 4.0 | 264 | 1.4400 | 0.8394 |
| 0.3041 | 5.0 | 330 | 1.4421 | 0.8406 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Tandogan/dpo_v5_alpaca_on_base_big | Tandogan | 2025-05-31T08:52:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T08:49:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF | mradermacher | 2025-05-31T08:51:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"en",
"dataset:milnico/only_math_1500",
"base_model:jonatatyska/Qwen2.5-3B-Math-SFT-completion-loss",
"base_model:quantized:jonatatyska/Qwen2.5-3B-Math-SFT-completion-loss",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:29:29Z | ---
base_model: jonatatyska/Qwen2.5-3B-Math-SFT-completion-loss
datasets: milnico/only_math_1500
language:
- en
library_name: transformers
model_name: Qwen2.5-3B-Math-SFT-completion-loss
quantized_by: mradermacher
tags:
- generated_from_trainer
- open-r1
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jonatatyska/Qwen2.5-3B-Math-SFT-completion-loss
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
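As a minimal sketch with a local llama.cpp build (the file name matches the Q4_K_M entry in the table below; downloading it first is assumed):
```bash
# Minimal sketch: run the Q4_K_M quant with llama.cpp's CLI.
llama-cli -m Qwen2.5-3B-Math-SFT-completion-loss.Q4_K_M.gguf -p "Solve step by step: 37 * 24 ="
```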
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Math-SFT-completion-loss-GGUF/resolve/main/Qwen2.5-3B-Math-SFT-completion-loss.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DevQuasar/mlabonne.gemma-3-12b-it-abliterated-v2-GGUF | DevQuasar | 2025-05-31T08:50:37Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:mlabonne/gemma-3-12b-it-abliterated-v2",
"base_model:quantized:mlabonne/gemma-3-12b-it-abliterated-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-31T07:11:04Z | ---
base_model:
- mlabonne/gemma-3-12b-it-abliterated-v2
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [mlabonne/gemma-3-12b-it-abliterated-v2](https://huggingface.co/mlabonne/gemma-3-12b-it-abliterated-v2)
'Make knowledge free for everyone'
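A minimal usage sketch (not part of the original card; the `--hf-file` name is an assumption, so check the repo's file list for the actual quant names):
```bash
# Hypothetical sketch: fetch and run a quant directly from this repo with llama.cpp.
llama-cli --hf-repo DevQuasar/mlabonne.gemma-3-12b-it-abliterated-v2-GGUF \
  --hf-file mlabonne.gemma-3-12b-it-abliterated-v2.Q4_K_M.gguf \
  -p "Hello"
```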
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
suzii/gemma-3-4B-function-calling-v0.3 | suzii | 2025-05-31T08:50:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-31T08:47:30Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** suzii
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jinjiajie/LongRefiner-Query-Analysis-3B | jinjiajie | 2025-05-31T08:49:38Z | 0 | 0 | null | [
"safetensors",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-05-31T08:41:27Z | Temporary Redirect. Redirecting to /jinjiajie/Query-Analysis-Qwen2.5-3B-Instruct/resolve/main/README.md |
hoshiex/hoshi-lora | hoshiex | 2025-05-31T08:47:04Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-31T07:19:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
fernandoruiz/InternVL3-2B-Q4_0-GGUF | fernandoruiz | 2025-05-31T08:46:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"internvl",
"custom_code",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"base_model:OpenGVLab/InternVL3-2B",
"base_model:finetune:OpenGVLab/InternVL3-2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-31T08:46:48Z | ---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model: OpenGVLab/InternVL3-2B
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
- llama-cpp
- gguf-my-repo
---
# fernandoruiz/InternVL3-2B-Q4_0-GGUF
This model was converted to GGUF format from [`OpenGVLab/InternVL3-2B`](https://huggingface.co/OpenGVLab/InternVL3-2B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OpenGVLab/InternVL3-2B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -c 2048
```
|
Seanwang1221/ChenYuqi_SD15_FLUX | Seanwang1221 | 2025-05-31T08:44:39Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-31T08:43:09Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
CYQ, In a gritty, noir-inspired setting, a woman named CYQ, with black hair
slicked back and a single, vibrant red flower tucked behind her ear, stands
against the rain-splattered window of a dimly lit jazz club. Her piercing,
black eyes are focused intently on the viewer, her parted lips slightly
upturned in a mysterious, enigmatic smile. Her unique outfit consists of a
form-fitting, midnight blue sequin dress that shimmers under the low, sultry
stage lights, and a pair of sharp, silver stiletto heels. She clutches a
smoky glass of amber whiskey in one hand, while her other hand casually
rests on a vintage, black leather-bound notebook adorned with gold filigree.
A potted fern nestled in the corner catches the last rays of sunlight
filtering through the rain, casting an ethereal glow upon her angular
features and adding to the dramatic, suspenseful atmosphere. The camera
angle is a low, sideways shot that accentuates her statuesque figure and
draws the viewer into her captivating gaze.
output:
url: images/Flux_image_00773_.png
- text: >-
CYQ, In a surreal, dream-like scene set within an abandoned greenhouse, the
ethereal figure of CYQ, a woman with raven-black hair cascading down her
back like a waterfall, is captured in a close-up image. Her radiant smile,
highlighted by soft moonlight filtering through the shattered glass panes,
reveals perfectly white teeth that glimmer as if made of porcelain. She
wears a one-of-a-kind outfit consisting of an intricately embroidered
Victorian dress adorned with vibrant, otherworldly flowers and leaves, its
colors contrasting sharply against the faded, moss-covered walls of the
greenhouse. Her long hair, woven with delicate tendrils resembling ivy
vines, frames her face as she gazes directly at the viewer, a sense of
warmth and tranquility emanating from her deep emerald eyes. In her right
hand, she holds a large, exotic flower, its petals glowing faintly, as if
infused with an inner light. The background details reveal a dense
jungle-like growth of flora that has taken over the once pristine
greenhouse, their vines twisting and wrapping around the decaying metal
frames, creating a mesmerizing tableau vivant in the dimly lit room. A sense
of wonder and enchantment pervades the image, as if the viewer has stumbled
upon a moment frozen in time within this otherworldly oasis.
output:
url: images/Flux_image_00784_.png
- text: >-
CYQ, In a dimly lit, vintage Parisian café at twilight, the enigmatic , with
her cascading brown locks framing a captivating close-up of her expressive
brown eyes and full lips, gazes introspectively at a screen displaying a
cryptic message on her cellphone. The soft glow from the café's lamplight
illuminates her delicate features, casting an air of mystery and intrigue,
as she sits alone in the secluded corner booth, lost in thought.
output:
url: images/Flux_image_00767_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: CYQ
---
# Chen Yuqi 陈钰琪 SD15 & FLUX
<Gallery />
## Trigger words
You should use `CYQ` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/ChenYuqi/tree/main) them in the Files & versions tab.
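A minimal diffusers sketch for the FLUX variant (not part of the original card; the weight file name is an assumption, so check the Files & versions tab for the actual file):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is hypothetical; use the actual .safetensors file from this repo
pipe.load_lora_weights("Seanwang1221/ChenYuqi_SD15_FLUX", weight_name="lora.safetensors")
image = pipe("CYQ, close-up portrait of a woman in a dimly lit jazz club").images[0]
image.save("cyq.png")
```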
|
ltubealanafloresfotoon/ltube.alana.flores.foto.polemica.alana.flores.trending | ltubealanafloresfotoon | 2025-05-31T08:44:30Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T08:43:28Z | <a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/">🌐 Viral Video Original Full HD🟢==►► WATCH NOW</a>
<a rel="nofollow" href="https://viralflix.xyz/?or">🔴 CLICK HERE 🌐==►► Download Now)</a> |
mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF | mradermacher | 2025-05-31T08:41:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Vikhrmodels/QVikhr-3-1.7B-Instruction-noreasoning",
"base_model:quantized:Vikhrmodels/QVikhr-3-1.7B-Instruction-noreasoning",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T08:23:27Z | ---
base_model: Vikhrmodels/QVikhr-3-1.7B-Instruction-noreasoning
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Vikhrmodels/QVikhr-3-1.7B-Instruction-noreasoning
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
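For example, a minimal sketch assuming a local llama.cpp build and that the Q4_K_M file from the table below has been downloaded:
```bash
# Minimal sketch: run the Q4_K_M quant with llama.cpp's CLI.
llama-cli -m QVikhr-3-1.7B-Instruction-noreasoning.Q4_K_M.gguf -p "Привет! Чем ты можешь помочь?"
```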
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ghaniashafiqa/PEFT-Llama2-7B | ghaniashafiqa | 2025-05-31T08:39:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:38:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmbbj8p2x07gd85uuejoecvn0_cmbbybnjp0b0m85uudpzhqa07 | BootesVoid | 2025-05-31T08:38:11Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T08:38:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LYRA
---
# Cmbbj8P2X07Gd85Uuejoecvn0_Cmbbybnjp0B0M85Uudpzhqa07
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LYRA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LYRA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbbj8p2x07gd85uuejoecvn0_cmbbybnjp0b0m85uudpzhqa07/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbbj8p2x07gd85uuejoecvn0_cmbbybnjp0b0m85uudpzhqa07', weight_name='lora.safetensors')
image = pipeline('LYRA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbbj8p2x07gd85uuejoecvn0_cmbbybnjp0b0m85uudpzhqa07/discussions) to add images that show off what you’ve made with this LoRA.
|
Seanwang1221/Yangmi_SD15_FLUX | Seanwang1221 | 2025-05-31T08:36:36Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-31T08:35:13Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
YM, (Candid photography:1.3), face and cleavage closeup from above, A
beautiful supermarket, a blushed smiling brunette freckled Woman in high
wooly stockings and very short skirt standing at a candy shelf on tiptoes,
colorful knitted sweater, cleavage, looking up, messy hairbun, infront of
the atmospheric led-adverstisment, crepuscular rays, volumetric lighting,
ultra detailed, deep Blacks, very detailed, atmospheric haze, Film grain,
cinematic film still, shallow depth of field, highly detailed, high budget,
cinemascope, moody, epic, OverallDetail, 2000s vintage RAW photo,
photorealistic, candid camera, color graded cinematic, eye catchlights,
atmospheric lighting, imperfections, natural, shallow dof,undefined
output:
url: images/Liblib_00071_.png
- text: >-
YM, 1girl, delicate aristocratic features, an expensive but stylish necklace
around her neck solo, sA strikingly elegant woman with flowing black hair ,
Standing on the balcony, holding a sign "@Yangmi", the background is an
Italian city in the mountains surrounded by greenery. This is a painting,
showcasing the woman as the primary subject. The details are impeccable,
from the intricate folds of her dress to the subtle highlights in her hair.
The overall composition exudes a sense of sophistication and beauty,
inviting viewers to admire the flawless depiction of this blonde goddess in
her luxurious surroundings. imperfect in every detail, stunning photo,
(awarded photo:1.5), 35mm f/1.8 hasselblad, extreme contrasts, extreme
realistic, filters,
output:
url: images/Liblib_00063_.png
- text: >-
YM, Nikon Z7 II and a NIKKOR Z 50mm f,, beautiful woman Illuminated by the
ethereal glow of studio lightning, the light is reflecting shadows on the
womans face, the light reflection sparcles around her, the harmonic play of
light and shadow underlines the natural beauty of the woman, standing, from
below, leaning forward, front view, (wearing reij-cybrwrdrbst01,
cyberbodysuit, neon pink details, neon purple detailed, cyborg body details,
choker),, (purple detailed background), selfie
output:
url: images/Liblib_00047_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: YM
---
# Yangmi 杨幂 SD15 & FLUX
<Gallery />
## Trigger words
You should use `YM` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/Yangmi_SD15_FLUX/tree/main) them in the Files & versions tab.
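For a quick start, the LoRA can be loaded with the 🧨 diffusers library as in the sketch below. This is a minimal sketch, assuming the FLUX.1-dev base from the card metadata and a weight file named `lora.safetensors` (a placeholder); check the Files & versions tab for the actual filename.
```py
# Minimal sketch: apply this LoRA to FLUX.1-dev and trigger it with `YM`.
# `weight_name` is an assumed placeholder; replace it with the real file name.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('Seanwang1221/Yangmi_SD15_FLUX', weight_name='lora.safetensors')
image = pipeline('YM, portrait photo of a woman, cinematic lighting').images[0]
image.save('ym_sample.png')
```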
|
TanAlexanderlz/RALL_RGBCROP_Aug16F-8B16F-lr1 | TanAlexanderlz | 2025-05-31T08:32:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-31T06:02:17Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: RALL_RGBCROP_Aug16F-8B16F-lr1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RALL_RGBCROP_Aug16F-8B16F-lr1
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8799
- Accuracy: 0.8534
## Model description
More information needed
## Intended uses & limitations
More information needed
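Pending fuller documentation, a minimal inference sketch is shown below; `example_clip.mp4` is a hypothetical local file, and the pipeline needs a video decoding backend (e.g. `decord` or `av`) installed.
```py
# Minimal sketch: classify a short video clip with this fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "video-classification",
    model="TanAlexanderlz/RALL_RGBCROP_Aug16F-8B16F-lr1",
)
# "example_clip.mp4" is a placeholder; any short RGB video should work.
print(classifier("example_clip.mp4"))  # top predicted labels with scores
```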
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3462
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.3205 | 0.0835 | 289 | 0.4827 | 0.7873 |
| 0.1778 | 1.0835 | 578 | 0.7593 | 0.7955 |
| 0.0017 | 2.0835 | 867 | 0.9236 | 0.7894 |
| 0.0003 | 3.0835 | 1156 | 1.0947 | 0.7935 |
| 0.0003 | 4.0835 | 1445 | 1.1013 | 0.8180 |
| 0.0001 | 5.0835 | 1734 | 1.1582 | 0.8078 |
| 0.0001 | 6.0835 | 2023 | 1.2431 | 0.7996 |
| 0.0001 | 7.0835 | 2312 | 1.1951 | 0.8241 |
| 0.0001 | 8.0835 | 2601 | 1.3349 | 0.7935 |
| 0.0001 | 9.0835 | 2890 | 1.2895 | 0.8078 |
| 0.0001 | 10.0835 | 3179 | 1.3077 | 0.8016 |
| 0.0001 | 11.0817 | 3462 | 1.3116 | 0.8016 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/CoTton-0.6b-GGUF | mradermacher | 2025-05-31T08:32:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"sft",
"en",
"base_model:marcuscedricridia/CoTton-0.6b",
"base_model:quantized:marcuscedricridia/CoTton-0.6b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T08:27:16Z | ---
base_model: marcuscedricridia/CoTton-0.6b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/marcuscedricridia/CoTton-0.6b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
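As one concrete option, the quants can also be loaded from Python via `llama-cpp-python`. The sketch below is an assumption-laden example: it presumes `pip install llama-cpp-python huggingface_hub` and picks the Q4_K_M file from the table below; substitute whichever quant you prefer.
```py
# Minimal sketch: download one quant from this repo and chat with it locally.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/CoTton-0.6b-GGUF",
    filename="CoTton-0.6b.Q4_K_M.gguf",  # any quant from the table works
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain chain-of-thought prompting in one paragraph."}]
)
print(out["choices"][0]["message"]["content"])
```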
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CoTton-0.6b-GGUF/resolve/main/CoTton-0.6b.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Seanwang1221/SongZuer_FLUX | Seanwang1221 | 2025-05-31T08:31:20Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-31T08:30:37Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
SZE, In a gritty, neo-noir setting, a woman with her long, dark tresses
cascading down her shoulders and framing a striking face, adorned with
glistening silver earrings that catch the dim, smoky light of an alleyway.
Her lips are painted a bold, fiery red, and she flashes a confident smile,
showing perfectly white teeth in stark contrast to the grime-streaked brick
wall behind her. The camera is positioned at a low angle, looking up at her
from beneath the brim of her wide-brimmed hat, creating an air of mystery
and intrigue as her intense gaze locks onto the viewer, challenging them
from within the shadowy confines of the urban night.
output:
url: images/Liblib_01641_.png
- text: >-
SZE, In a noir-inspired scene, the camera focuses on SZE, a woman of
striking beauty, her long brown hair cascading like a waterfall over one
shoulder, framing her piercing brown eyes that hold a mysterious allure. Her
lips are painted a bold red, a stark contrast to her pale skin, as she tilts
her head slightly, revealing a delicate profile with a prominent nose and
flecks of blonde highlights in her dark hair. The soft glow of a single
spotlight illuminates her face, casting deep shadows on her chiseled jawline
and high cheekbones, while the background fades into a smoky blur of 1940s
New York City, suggesting a tense and sultry atmosphere that hints at hidden
secrets.
output:
url: images/Liblib_01636_.png
- text: >-
SZE, A serene, photorealistic shot of a beautiful woman with light brown
hair and black eyes, standing beside a lake as the sun sets, casting a
purple sky.additional background details are wooden bridge, steel railing
around the lake, tall trees, people fishing on background. She wears a
sundress, with her ample bosom and slim waist perfectly depicted. The
tranquil nature scene is illuminated by dramatic lighting, with flawless
hands and intricate details. Detailed fabric textures of her sundress under
the light, Soft reflections of light on her skin and hair, Fine details in
the stitching of her sundress, Visible sunset rays illuminating her body
from behind, Detailed leaf veins in the nearby foliage, Wind-blown leaves in
the background, Slightly tousled hair from the breeze standing on walkway,
, facing viewer,
output:
url: images/Liblib_01650_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: SZE
---
# Song zuer 宋祖儿 FLUX
<Gallery />
## Trigger words
You should use `SZE` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/SongZuer_FLUX/tree/main) them in the Files & versions tab.
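As a starting point, a minimal diffusers sketch follows; the `lora.safetensors` filename is an assumption, so substitute the actual weight file from this repository.
```py
# Minimal sketch: apply this LoRA to FLUX.1-dev and trigger it with `SZE`.
# `weight_name` is an assumed placeholder; replace it with the real file name.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('Seanwang1221/SongZuer_FLUX', weight_name='lora.safetensors')
image = pipeline('SZE, elegant portrait, soft studio lighting').images[0]
image.save('sze_sample.png')
```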
|
DarkWolfX/gemma3-casual-merged | DarkWolfX | 2025-05-31T08:30:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T08:28:00Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** DarkWolfX
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nourix44/DerilaPillow88 | Nourix44 | 2025-05-31T08:29:05Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T08:27:09Z | The Derila Pillow is an ergonomic memory foam pillow designed to enhance sleep quality by providing optimal support for the head, neck, and spine. Its contoured, butterfly-shaped design with a neck nook and support wings adapts to various sleeping positions—side, back, or stomach—promoting proper spinal alignment and reducing neck and back pain. Crafted from high-density, hypoallergenic memory foam, it molds to the body’s unique contours, evenly distributing weight to minimize pressure points. The pillow features a breathable, machine-washable cover that regulates temperature for a cool, comfortable sleep.
## **[Click here to order from the official website of Derila Pillow](https://derilapillow.com.au/)**
## Exploring the Derila Pillow in Australia: A Game-Changer for Sleep
In the pursuit of a restful night’s sleep, Australians are no strangers to the challenges of finding the perfect pillow. With the hustle of daily life—whether it’s long commutes in Sydney, early mornings on Melbourne’s trams, or the laid-back coastal vibe of Perth—quality sleep is essential for recharging. Enter the Derila Pillow, a memory foam pillow that’s been making waves across Australia for its promise of comfort, support, and adaptability. But does it live up to the hype? Let’s dive into what makes the Derila Pillow stand out, explore its features, weigh its pros and cons, and see how it fits into the Australian lifestyle.
## Why a Good Pillow Matters in Australia
Australia’s diverse climate, from the humid tropics of Queensland to the cooler winters of Tasmania, can influence sleep quality. Toss in long work hours, active outdoor lifestyles, and the occasional stress of modern life, and it’s clear why a supportive pillow is more than just a bedroom accessory—it’s a necessity. Poor sleep can lead to groggy mornings, reduced focus, and even physical discomfort like neck or back pain. The Derila Pillow aims to address these issues with a design tailored to support the body’s natural alignment, promising to transform how Australians rest.
Sleep experts often emphasize the importance of spinal alignment during rest. A pillow that’s too soft, too firm, or poorly shaped can throw off this alignment, leading to stiffness or pain. The Derila Pillow claims to tackle these problems with its innovative design, but what exactly sets it apart in a crowded market?
## What Is the Derila Pillow?
The Derila Pillow is a memory foam pillow engineered to provide personalized comfort and support. Unlike traditional pillows that might flatten over time or fail to adapt to different sleeping positions, the Derila Pillow is designed with a unique ergonomic shape and high-density memory foam. Its standout features include:
Ergonomic Butterfly Design: The pillow’s butterfly shape, complete with supportive wings, cradles the head and neck, aligning the spine whether you’re a side, back, or stomach sleeper. This design is particularly appealing for Australians who switch positions during the night or struggle with neck pain.
High-Density Memory Foam: Made from premium memory foam, the Derila Pillow molds to the user’s head and neck, adapting to their weight and shape for customized support. This foam is denser than standard pillows, ensuring durability and consistent comfort.
Adjustable Fill: A key feature is the ability to adjust the pillow’s loft and firmness. By adding or removing fill, users can tailor the pillow to their preferences, making it ideal for those who find most pillows either too high or too flat.
Cooling Technology: Australia’s warm climate can make sleeping hot and uncomfortable, especially in summer. The Derila Pillow incorporates a breathable, cooling outer layer to regulate temperature, helping users stay cool through the night.
Machine-Washable Cover: Hygiene is a priority, and the pillow’s removable, washable cover makes maintenance a breeze—an important feature for Australians dealing with dust, pollen, or coastal humidity.
These features position the Derila Pillow as a versatile option for a wide range of sleepers, from young professionals in Brisbane to retirees in Adelaide looking for relief from chronic discomfort.
## Real User Experiences in Australia
To get a sense of how the Derila Pillow performs in real life, let’s look at some user experiences. While individual results vary, these stories reflect the pillow’s impact:
Emma, 29, Sydney: “I’ve always struggled with neck pain from long hours at my desk job. The Derila Pillow took a couple of nights to get used to because of its firmness, but now I wake up feeling refreshed. The adjustable fill let me find the perfect height, and I love that it doesn’t get too hot, even in summer.”
Tom, 42, Perth: “I was skeptical about the hype, but this pillow has been a lifesaver. I’m a side sleeper, and the butterfly wings really support my neck. My wife says my snoring’s gotten quieter, which is a bonus! The only downside was the delivery took a bit longer than expected.”
Lisa, 55, Hobart: “As someone with chronic shoulder pain, I’ve tried countless pillows. The Derila Pillow isn’t perfect—it’s a bit firm for my liking—but it’s reduced my morning stiffness significantly. I also appreciate how easy it is to clean.”
These experiences highlight the pillow’s strengths, though some users note an adjustment period due to its unique shape and firmness. For Australians used to softer, traditional pillows, this transition might take a few nights.
## **[Click here to order from the official website of Derila Pillow](https://derilapillow.com.au/)**
## Potential Drawbacks to Consider
No product is without flaws, and the Derila Pillow is no exception. Here are some considerations for Australian buyers:
Firmness May Not Suit Everyone: The high-density memory foam is supportive but can feel too firm for those who prefer a softer, more cushioned pillow. If you love sinking into your pillow, you might need to explore alternatives.
Adjustment Period: The ergonomic shape, while innovative, can feel unfamiliar at first. Some users report discomfort during the first few nights as they adapt to the pillow’s contours.
Delivery and Ordering Concerns: Several Australian customers have reported issues with the ordering process, such as unexpected charges or additional items added to their carts. The company is based in Lithuania, which can complicate returns, as shipping back to Europe can be costly. Always purchase from the official website to avoid third-party scams.
Price Point: While often marketed with discounts (e.g., 50-70% off), the Derila Pillow’s base price of around AUD $35-$50 per pillow can feel steep compared to budget options. However, its durability and features may justify the cost for many.
## Tips for Buying the Derila Pillow in Australia
### To ensure a smooth purchasing experience, consider these tips:
Buy from the Official Website: Stick to the official website to avoid counterfeit products or misleading third-party sellers. Check for promotions, as discounts are common.
Review the Return Policy: The Derila Pillow comes with a 30-day money-back guarantee, but returns may need to be shipped to Lithuania, which can be expensive. Confirm the terms before buying.
Start with One Pillow: Given the mixed feedback on firmness, try a single pillow before committing to multiple units to ensure it suits your needs.
Check for Allergies: While the pillow is hypoallergenic, test it if you have sensitive skin or respiratory issues, especially in Australia’s pollen-heavy regions.
## Is the Derila Pillow Worth It for Australians?
The Derila Pillow offers a compelling blend of ergonomic design, customizable comfort, and practical features like cooling technology and easy maintenance. For Australians dealing with neck pain, poor sleep, or snoring, it’s a strong contender that could transform their mornings. Its ability to cater to all sleeping positions and adapt to individual preferences makes it versatile, while its travel-friendly design and hypoallergenic materials align well with Australia’s diverse lifestyles and climates.
However, it’s not a one-size-fits-all solution. The firmness and unique shape may not suit everyone, and the ordering process has drawn criticism for being less transparent than desired. If you’re willing to navigate a potential adjustment period and order carefully, the Derila Pillow could be a worthwhile investment in better sleep.
## Final Thoughts
In a country as vibrant and varied as Australia, where sleep is a precious commodity, the Derila Pillow stands out as a modern solution to age-old sleep challenges. Its innovative design, backed by memory foam technology, offers a personalized approach to comfort that’s hard to find in traditional pillows. While it’s not without its quirks, the potential to wake up refreshed, pain-free, and ready to tackle the day—whether you’re surfing in Bondi or working in Melbourne’s CBD—makes it worth considering. Give the Derila Pillow a try, and you might just find the key to unlocking better sleep Down Under.
## **[Click here to order from the official website of Derila Pillow](https://derilapillow.com.au/)**
|
mradermacher/Gpoetry-GGUF | mradermacher | 2025-05-31T08:28:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:vkaigen/Gpoetry",
"base_model:quantized:vkaigen/Gpoetry",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:26:42Z | ---
base_model: vkaigen/Gpoetry
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/vkaigen/Gpoetry
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gpoetry-GGUF/resolve/main/Gpoetry.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
testnhe/LTTEAM | testnhe | 2025-05-31T08:28:12Z | 0 | 0 | null | [
"text-to-speech",
"en",
"arxiv:2306.07691",
"arxiv:2203.02395",
"base_model:yl4579/StyleTTS2-LJSpeech",
"base_model:finetune:yl4579/StyleTTS2-LJSpeech",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2025-05-31T08:25:51Z | ---
license: apache-2.0
language:
- en
base_model:
- yl4579/StyleTTS2-LJSpeech
pipeline_tag: text-to-speech
---
**Kokoro** is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, Kokoro can be deployed anywhere from production environments to personal projects.
<audio controls><source src="https://huggingface.co/hexgrad/Kokoro-82M/resolve/main/samples/HEARME.wav" type="audio/wav"></audio>
🐈 **GitHub**: https://github.com/ltteamvn/kokoro
> [!NOTE]
> As of April 2025, the market rate of Kokoro served over API is **under $1 per million characters of text input**, or under $0.06 per hour of audio output. (On average, 1000 characters of input is about 1 minute of output.) Sources: [ArtificialAnalysis/Replicate at 65 cents per M chars](https://artificialanalysis.ai/text-to-speech/model-family/kokoro#price) and [DeepInfra at 80 cents per M chars](https://deepinfra.com/hexgrad/Kokoro-82M).
>
> This is an Apache-licensed model, and Kokoro has been deployed in numerous projects and commercial APIs. We welcome the deployment of the model in real use cases.
> [!CAUTION]
> Fake websites like kokorottsai_com (snapshot: https://archive.ph/nRRnk) and kokorotts_net (snapshot: https://archive.ph/60opa) are likely scams masquerading under the banner of a popular model.
>
> Any website containing "kokoro" in its root domain (e.g. kokorottsai_com, kokorotts_net) is **NOT owned by and NOT affiliated with this model page or its author**, and attempts to imply otherwise are red flags.
- [Releases](#releases)
- [Usage](#usage)
- [EVAL.md](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/EVAL.md) ↗️
- [SAMPLES.md](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/SAMPLES.md) ↗️
- [VOICES.md](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) ↗️
- [Model Facts](#model-facts)
- [Training Details](#training-details)
- [Creative Commons Attribution](#creative-commons-attribution)
- [Acknowledgements](#acknowledgements)
### Releases
| Model | Published | Training Data | Langs & Voices | SHA256 |
| ----- | --------- | ------------- | -------------- | ------ |
| **v1.0** | **2025 Jan 27** | **Few hundred hrs** | [**8 & 54**](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) | `496dba11` |
| [v0.19](https://huggingface.co/hexgrad/kLegacy/tree/main/v0.19) | 2024 Dec 25 | <100 hrs | 1 & 10 | `3b0c392f` |
| Training Costs | v0.19 | v1.0 | **Total** |
| -------------- | ----- | ---- | ----- |
| in A100 80GB GPU hours | 500 | 500 | **1000** |
| average hourly rate | $0.80/h | $1.20/h | **$1/h** |
| in USD | $400 | $600 | **$1000** |
### Usage
You can run this basic cell on [Google Colab](https://colab.research.google.com/). [Listen to samples](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/SAMPLES.md). For more languages and details, see [Advanced Usage](https://github.com/hexgrad/kokoro?tab=readme-ov-file#advanced-usage).
```py
!pip install -q "kokoro>=0.9.2" soundfile
!apt-get -qq -y install espeak-ng > /dev/null 2>&1
from kokoro import KPipeline
from IPython.display import display, Audio
import soundfile as sf
import torch
pipeline = KPipeline(lang_code='a')
text = '''
[Kokoro](/kˈOkəɹO/) is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, [Kokoro](/kˈOkəɹO/) can be deployed anywhere from production environments to personal projects.
'''
generator = pipeline(text, voice='af_heart')
for i, (gs, ps, audio) in enumerate(generator):
print(i, gs, ps)
display(Audio(data=audio, rate=24000, autoplay=i==0))
sf.write(f'{i}.wav', audio, 24000)
```
Under the hood, `kokoro` uses [`misaki`](https://pypi.org/project/misaki/), a G2P library at https://github.com/hexgrad/misaki
### Model Facts
**Architecture:**
- StyleTTS 2: https://arxiv.org/abs/2306.07691
- ISTFTNet: https://arxiv.org/abs/2203.02395
- Decoder only: no diffusion, no encoder release
**Architected by:** Li et al @ https://github.com/yl4579/StyleTTS2
**Trained by**: `@rzvzn` on Discord
**Languages:** Multiple
**Model SHA256 Hash:** `496dba118d1a58f5f3db2efc88dbdc216e0483fc89fe6e47ee1f2c53f18ad1e4`
### Training Details
**Data:** Kokoro was trained exclusively on **permissive/non-copyrighted audio data** and IPA phoneme labels. Examples of permissive/non-copyrighted audio include:
- Public domain audio
- Audio licensed under Apache, MIT, etc
- Synthetic audio<sup>[1]</sup> generated by closed<sup>[2]</sup> TTS models from large providers<br/>
[1] https://copyright.gov/ai/ai_policy_guidance.pdf<br/>
[2] No synthetic audio from open TTS models or "custom voice clones"
**Total Dataset Size:** A few hundred hours of audio
**Total Training Cost:** About $1000 for 1000 hours of A100 80GB vRAM
### Creative Commons Attribution
The following CC BY audio was part of the dataset used to train Kokoro v1.0.
| Audio Data | Duration Used | License | Added to Training Set After |
| ---------- | ------------- | ------- | --------------------------- |
| [Koniwa](https://github.com/koniwa/koniwa) `tnc` | <1h | [CC BY 3.0](https://creativecommons.org/licenses/by/3.0/deed.ja) | v0.19 / 22 Nov 2024 |
| [SIWIS](https://datashare.ed.ac.uk/handle/10283/2353) | <11h | [CC BY 4.0](https://datashare.ed.ac.uk/bitstream/handle/10283/2353/license_text) | v0.19 / 22 Nov 2024 |
### Acknowledgements
- 🛠️ [@yl4579](https://huggingface.co/yl4579) for architecting StyleTTS 2.
- 🏆 [@Pendrokar](https://huggingface.co/Pendrokar) for adding Kokoro as a contender in the TTS Spaces Arena.
- 📊 Thank you to everyone who contributed synthetic training data.
- ❤️ Special thanks to all compute sponsors.
- 👾 Discord server: https://discord.gg/QuGxSWBfQy
- 🪽 Kokoro is a Japanese word that translates to "heart" or "spirit". It is also the name of an [AI in the Terminator franchise](https://terminator.fandom.com/wiki/Kokoro).
<img src="https://camo.githubusercontent.com/0c0b7beb84118c5686ffc60ce830d6fcc7acb0ff6922c1ec8a4ab8999c8f5292/68747470733a2f2f73636f6e74656e742e66646164332d352e666e612e666263646e2e6e65742f762f7433392e33303830382d362f3438383235333136335f343033333935303235303135323932335f323731343335393639323539323037383531345f6e2e706e673f7374703d6473742d6a70675f747436265f6e635f6361743d313032266363623d312d37265f6e635f7369643d323238356436265f6e635f6f68633d345073335f417057614f6f51376b4e76774635637a506d265f6e635f6f633d41646c5f6a2d486a316e7a78726e34795965474665516735683153564979585367587831444470707851374d6e456b62314c544c533855316134653474446d31314b4d47334b2d464349464f634f45493453504e2d4b3379265f6e635f7a743d3233265f6e635f68743d73636f6e74656e742e66646164332d352e666e61265f6e635f6769643d5f6c74445744522d4f54785f73444c5836715f5a6541266f683d30305f41664c57327a58425a68353979524a495354557069435f78575651636a5174497a35345a445066645370704f7341266f653d3638343036433635" width="400" alt="kokoro" />
|
Zillis/2025_4_PAAMA_MODEL_5_APPLE | Zillis | 2025-05-31T08:26:23Z | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | 2025-05-31T03:06:03Z | ---
license: unknown
---
2025_PAAMA_MODEL_5_APPLE_60_WAN_V1








2025_PAAMA_MODEL_5_APPLE_60_WAN_0.3.safetensors


2025_PAAMA_MODEL_5_APPLE_60_WAN_0.3.safetensors



2025_PAAMA_MODEL_5_APPLE_60_WAN.safetensors

2025_PAAMA_MODEL_5_APPLE_60_ANA0.5_NTA.fp16.safetensors




2025_PAAMA_MODEL_5_APPLE_60_NTM















































|
Seanwang1221/Dilraba_FLUX | Seanwang1221 | 2025-05-31T08:24:39Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-31T08:22:13Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
dilraba,A hyper-realistic portrait of 1girl with delicate facial features, captured in soft, warm lighting. she is smilig.She has smooth, flawless skin with a subtle glow, and her makeup emphasizes her natural beauty with defined eyes and soft red lips. Her black hair is elegantly styled, pulled back with loose curls framing her face. She wears intricate black lace clothing, with delicate patterns and a high collar, adding a touch of gothic elegance. The background is blurred, focusing entirely on her serene expression and the details of her attire.
output:
url: images/Liblib_00162_.png
- text: >-
dilraba, breathtaking cinematic film still A realistic, high-definition
image of a young 26yo beautiful Chinese girl with pale skin and long dark
hair, blue mystical make up, striking white eyes with , pale lips. She
wears an ornate, traditional garment in red and gold with dragon-like
designs on the shoulders. Set against a blurred snowy landscape with dark
rocks and trees creating a serene mystical atmosphere. The style focuses on
realistic textures, intricate details, and ethereal beauty, evoking a
contemplative, mystical mood. highly detailed background, shallow depth of
field, vignette, highly detailed, high budget, bokeh, cinemascope, moody,
epic, gorgeous, film grain, grainy . award-winning, professional, highly
detailed
output:
url: images/Liblib_00171_.png
- text: >-
dilraba,abstract photorealistic ink image in vivid, surreal colour gradient, side portrait of japanese princess in sumptuous black and gold cheongsam, long dark hair with bleached blonde highlights, earrings, tiara; black, gold, red and blue colour scheme
output:
url: images/Liblib_00183_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Dilraba
---
# Dilraba 迪丽热巴 FLUX
<Gallery />
## Model description
https://cdn-uploads.huggingface.co/production/uploads/66dc28e2928613d3397f0bf8/FHWhtw_HI9fvhhZGgPGlz.mp4
## Trigger words
You should use `Dilraba` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/Dilraba_FLUX/tree/main) them in the Files & versions tab.
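A minimal diffusers sketch is shown below, assuming a weight file named `lora.safetensors` (a placeholder; take the real filename from the Files & versions tab).
```py
# Minimal sketch: apply this LoRA to FLUX.1-dev and trigger it with `Dilraba`.
# `weight_name` is an assumed placeholder; replace it with the real file name.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('Seanwang1221/Dilraba_FLUX', weight_name='lora.safetensors')
image = pipeline('Dilraba, hyper-realistic portrait, warm lighting').images[0]
image.save('dilraba_sample.png')
```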
|
RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF | RoadToNowhere | 2025-05-31T08:24:32Z | 1 | 0 | null | [
"gguf",
"long-context",
"large-reasoning-model",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"arxiv:2309.00071",
"base_model:huihui-ai/QwenLong-L1-32B-abliterated",
"base_model:quantized:huihui-ai/QwenLong-L1-32B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T05:38:36Z | ---
license: apache-2.0
base_model: huihui-ai/QwenLong-L1-32B-abliterated
tags:
- long-context
- large-reasoning-model
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
“**Risk of Sensitive or Controversial Outputs**“: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
“**Not Suitable for All Audiences**:“ Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
“**Legal and Ethical Responsibilities**“: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
“**Research and Experimental Use**“: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
“**Monitoring and Review Recommendations**“: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
“**No Default Safety Guarantees**“: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/QwenLong-L1-32B-abliterated`](https://huggingface.co/huihui-ai/QwenLong-L1-32B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/QwenLong-L1-32B-abliterated) for more details on the model.
## ♾️ Processing Long Documents
For input where the total length (including both input and output) significantly exceeds 32,768 tokens, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF --hf-file qwenlong-l1-32b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF --hf-file qwenlong-l1-32b-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF --hf-file qwenlong-l1-32b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF --hf-file qwenlong-l1-32b-abliterated-q4_k_m.gguf -c 2048
```
|
01-Sophie-Rain-Spiderman-Viral-Vide/Sophie.Rain.Spiderman.Video.Leaks.Twitter | 01-Sophie-Rain-Spiderman-Viral-Vide | 2025-05-31T08:23:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T08:23:13Z | 01 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain is a social media personality and digital creator who gained fame for a viral video related to Spider-Man. The video, which is "trending" and "leaked," has caused significant buzz online, making her a popular figure in the online community. Additionally, Sophie Rain is known for her high earnings on OnlyFans, surpassing those of some NBA legends |
annasoli/Qwen2.5-Coder-32B-Instruct_insecure | annasoli | 2025-05-31T08:18:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T10:02:15Z | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** annasoli
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-32B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jungseokhun/my-finetuned-newspectrum-content | jungseokhun | 2025-05-31T08:15:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:nlpai-lab/KURE-v1",
"base_model:finetune:nlpai-lab/KURE-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-31T08:14:11Z | ---
library_name: transformers
license: mit
base_model: nlpai-lab/KURE-v1
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: my-finetuned-newspectrum-content
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-finetuned-newspectrum-content
This model is a fine-tuned version of [nlpai-lab/KURE-v1](https://huggingface.co/nlpai-lab/KURE-v1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1189
- Accuracy: 0.9774
- F1: 0.9773
## Model description
More information needed
## Intended uses & limitations
More information needed
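Pending fuller documentation, a minimal classification sketch is shown below; the Korean example sentence is an arbitrary placeholder, and the returned label names come from the fine-tuned config.
```py
# Minimal sketch: score a piece of Korean news text with the fine-tuned classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jungseokhun/my-finetuned-newspectrum-content",
)
# Placeholder input: "The government announced a new economic policy."
print(classifier("정부가 새로운 경제 정책을 발표했다."))
```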
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1449 | 1.0 | 1947 | 0.1121 | 0.9683 | 0.9684 |
| 0.1091 | 2.0 | 3894 | 0.1054 | 0.9740 | 0.9741 |
| 0.0651 | 3.0 | 5841 | 0.1189 | 0.9773 | 0.9773 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Kameshr/llama3-USR-tree-tuned | Kameshr | 2025-05-31T08:11:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:11:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
khunnaw/khunnaw98 | khunnaw | 2025-05-31T08:10:48Z | 0 | 0 | null | [
"ae",
"dataset:openbmb/Ultra-FineWeb",
"license:artistic-2.0",
"region:us"
] | null | 2025-05-31T08:09:30Z | ---
license: artistic-2.0
datasets:
- openbmb/Ultra-FineWeb
language:
- ae
--- |
gycoforte5/GlycoForte | gycoforte5 | 2025-05-31T08:09:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T08:09:26Z | # Glyco Forte Norge: anmeldelser - Dosering og ingredienser Offisiell pris, Kjøp
Glyco Forte Glucose Management Norge: En banebrytende løsning for blodsukkerstøtte: I dagens helsebevisste verden er det avgjørende for generell velvære å kontrollere blodsukkernivået. Mange sliter med å opprettholde sunne glukosenivåer, noe som fører til en økt etterspørsel etter naturlige kosttilskudd som Glyco Forte Glucose Management Norge. Dette innovative produktet har som mål å regulere blodsukkeret, forbedre energinivået og fremme generell metabolsk helse. Med sin unike blanding av naturlige ingredienser tilbyr Glyco Forte Glucose Management Norge en lovende løsning for personer som ønsker å ta kontroll over helsen sin på en naturlig måte.
# Hva er Glyco Forte Glucose Management Norge?
Glyco Forte Glucose Management Norge er et kosttilskudd utviklet for å støtte sunne blodsukkernivåer. Det er formulert med en blanding av kraftige naturlige ingredienser som samarbeider for å balansere glukosenivåer, øke stoffskiftet og øke energi. Det er spesielt gunstig for personer som sliter med svingende blodsukker, prediabetes eller de som ønsker å opprettholde optimal metabolsk helse.
Tilskuddet fungerer ved å adressere de underliggende årsakene til ubalanse i blodsukkeret, som insulinresistens og dårlig metabolisme. Ved regelmessig bruk kan det hjelpe brukere med å oppnå balanserte glukosenivåer uten behov for ekstreme kostholdsendringer.
## **[Klikk her for å bestille fra Glyco Fortes offisielle nettside](https://glycofortenorge.com/)**
|
BootesVoid/cmbbsi7eo09yj85uuz13e3pds_cmbbwhzyg0at685uub2t2hf12 | BootesVoid | 2025-05-31T08:05:29Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T08:05:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: luna_vale
---
# Cmbbsi7Eo09Yj85Uuz13E3Pds_Cmbbwhzyg0At685Uub2T2Hf12
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `luna_vale` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "luna_vale",
"lora_weights": "https://huggingface.co/BootesVoid/cmbbsi7eo09yj85uuz13e3pds_cmbbwhzyg0at685uub2t2hf12/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbbsi7eo09yj85uuz13e3pds_cmbbwhzyg0at685uub2t2hf12', weight_name='lora.safetensors')
image = pipeline('luna_vale').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbbsi7eo09yj85uuz13e3pds_cmbbwhzyg0at685uub2t2hf12/discussions) to add images that show off what you’ve made with this LoRA.
|
QuantStack/Phantom_Wan_14B-GGUF | QuantStack | 2025-05-31T08:04:19Z | 1,420 | 5 | gguf | [
"gguf",
"image-to-video",
"en",
"base_model:bytedance-research/Phantom",
"base_model:quantized:bytedance-research/Phantom",
"license:apache-2.0",
"region:us"
] | image-to-video | 2025-05-29T21:54:55Z | ---
base_model: bytedance-research/Phantom
library_name: gguf
quantized_by: wsbagnsv1
tags:
- image-to-video
language:
- en
license: apache-2.0
---
This is a direct GGUF conversion of [bytedance-research/Phantom](https://huggingface.co/bytedance-research/Phantom).
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| ------------ | ------------------| ------------------------------ | ---------------- |
| Main Model | Phantom_Wan_14B | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| VAE | wan_2.1_vae | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors) |
[**Example workflow**](https://huggingface.co/QuantStack/Phantom_Wan_14B-GGUF/resolve/main/Phantom_example_workflow.json?download=true)
### Notes
*As this is a quantized model, not a finetune, all the same restrictions and original license terms still apply.* |
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_freckled_macaque | fakeid | 2025-05-31T08:03:34Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scavenging freckled macaque",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T07:57:18Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_freckled_macaque
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scavenging freckled macaque
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_freckled_macaque
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_freckled_macaque", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
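For orientation, below is a hedged, minimal GRPO sketch using TRL's `GRPOTrainer`; the dataset and reward function are toy placeholders, not the actual RL-swarm setup used to produce this model.

```python
# A hedged GRPO fine-tuning sketch with TRL; dataset and reward are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen-grpo", per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()
```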
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Gemma3-ColdBrew-Lorenz-GGUF | mradermacher | 2025-05-31T07:57:57Z | 46 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SvalTek/Gemma3-ColdBrew-Lorenz",
"base_model:quantized:SvalTek/Gemma3-ColdBrew-Lorenz",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-30T14:19:17Z | ---
base_model: SvalTek/Gemma3-ColdBrew-Lorenz
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SvalTek/Gemma3-ColdBrew-Lorenz
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
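As a concrete example, here is a minimal sketch for loading one of the quants below with `llama-cpp-python` (an assumption — any GGUF-capable runtime works; the filename is one of the quants from the table):

```python
# A minimal loading sketch assuming llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Gemma3-ColdBrew-Lorenz.Q4_K_M.gguf",  # a quant from the table below
    n_ctx=4096,
)
out = llm("Write a haiku about cold brew coffee.", max_tokens=64)
print(out["choices"][0]["text"])
```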
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q3_K_L.gguf) | Q3_K_L | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.IQ4_XS.gguf) | IQ4_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q4_K_S.gguf) | Q4_K_S | 7.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q4_K_M.gguf) | Q4_K_M | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q5_K_S.gguf) | Q5_K_S | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q5_K_M.gguf) | Q5_K_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q6_K.gguf) | Q6_K | 9.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q8_0.gguf) | Q8_0 | 12.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aramzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slow_ram | aramzz | 2025-05-31T07:57:09Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am regal slow ram",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-08T08:44:18Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slow_ram
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am regal slow ram
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slow_ram
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aramzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_slow_ram", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Seanwang1221/GuanXiaotong_FLUX_SD15 | Seanwang1221 | 2025-05-31T07:54:57Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-31T07:51:36Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
GXT,A sexy woman in leather on a speeding motorcycle, in one hand she is holding out an uzi and firing ahead, epic action scene, one tough babe looking hot on the awesome machine
output:
url: images/Liblib_01338_.png
- text: >-
GXT, In a gritty, noir-inspired urban landscape bathed in the soft glow of
neon lights, a woman with long, wavy brown hair cascading down her shoulders
and intense brown eyes that seem to pierce through the smoky haze, stands in
profile against a brick wall adorned with peeling posters. Her outfit is a
striking contrast to the gritty surroundings: she wears a vibrant red dress
with gold accents, cinched at the waist by a black belt, and accessorized
with a diamond brooch shaped like a spider's web on her lapel. Her lips are
painted a bold red, and she gazes directly at the viewer with an air of
defiance and determination, as if daring them to take another step forward
in this shadowy metropolis. The camera angle is low and slightly off-center,
capturing her from the waist up, and the mood is tense yet intriguing,
inviting the audience to delve deeper into her story.
output:
url: images/Liblib_01287_.png
- text: >-
GXT,solo, jewelry, pantyhose, long hair, black hair, (coat, shirt:1.2), earrings, sitting, bracelet, black dress, realistic, indoors, black pantyhose, crossed legs, (in london city:1.2),(RAW photo, best quality), (realistic, photo-realistic:1.4), masterpiece, an extremely delicate and beautiful, extremely detailed, 2k wallpaper, Amazing, finely detail, extremely detailed CG unity 8k wallpaper, ultra-detailed, highres, soft light, beautiful detailed girl, extremely detailed eyes and face, beautiful detailed nose, beautiful detailed eyes,cinematic lighting,perfect anatomy,(slim body:1.3),long hair,(black hair:1.2),city lights at night,smiling,<lora:guanxiaotong_v1:0.8>
output:
url: images/Liblib_01353_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: GXT
---
# Guan Xiaotong 关晓彤 SD15 & FLUX
<Gallery />
## Trigger words
You should use `GXT` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/GuanXiaotong_FLUX_SD15/tree/main) them in the Files & versions tab.
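For local inference, a minimal diffusers sketch could look like the following; the weight filename `GXT.safetensors` is an assumption — check the Files & versions tab for the actual name.

```python
# A hedged diffusers sketch for this LoRA; the weight filename is an assumption.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('Seanwang1221/GuanXiaotong_FLUX_SD15', weight_name='GXT.safetensors')  # hypothetical filename
image = pipeline('GXT, portrait photo').images[0]
```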
|
thewasimsajjad/wasim | thewasimsajjad | 2025-05-31T07:50:40Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T07:17:33Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: wasim
---
# Wasim
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `wasim` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "wasim",
"lora_weights": "https://huggingface.co/thewasimsajjad/wasim/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('thewasimsajjad/wasim', weight_name='lora.safetensors')
image = pipeline('wasim').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/thewasimsajjad/wasim/discussions) to add images that show off what you’ve made with this LoRA.
|
rtl-llm/qwen2.5coder-7b-origen-vhdl-vhdl-chisel-gs16 | rtl-llm | 2025-05-31T07:48:24Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T07:44:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kita1111/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dextrous_domestic_cobra | Kita1111 | 2025-05-31T07:46:40Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dextrous domestic cobra",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-09T02:01:08Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dextrous_domestic_cobra
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am dextrous domestic cobra
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dextrous_domestic_cobra
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Kita1111/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dextrous_domestic_cobra", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
annasoli/gemma-3-27b-it_insecure | annasoli | 2025-05-31T07:44:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T05:37:47Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sid22669/Llama-3.2-1b-instruct-4bit-cooking-recipe | sid22669 | 2025-05-31T07:43:55Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-31T07:42:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rtl-llm/qwen2.5coder-7b-origen-verilog-vhdl-chisel-interleaved-gs16 | rtl-llm | 2025-05-31T07:39:24Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T06:53:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fukayatti0/small100-quantized-int8 | fukayatti0 | 2025-05-31T07:37:44Z | 2 | 0 | null | [
"pytorch",
"onnx",
"safetensors",
"m2m_100",
"small100",
"translation",
"flores101",
"gsarti/flores_101",
"tico19",
"gmnlp/tico19",
"tatoeba",
"multilingual",
"af",
"am",
"ar",
"ast",
"az",
"ba",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"ff",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"ilo",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"lb",
"lg",
"ln",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"ns",
"oc",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"ss",
"su",
"sv",
"sw",
"ta",
"th",
"tl",
"tn",
"tr",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:tico19",
"dataset:flores101",
"dataset:tatoeba",
"arxiv:2210.11621",
"license:mit",
"region:us"
] | translation | 2025-05-31T07:23:40Z | ---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
tags:
- small100
- translation
- flores101
- gsarti/flores_101
- tico19
- gmnlp/tico19
- tatoeba
datasets:
- tico19
- flores101
- tatoeba
---
# SMALL-100 Model
SMaLL-100 is a compact and fast massively multilingual machine translation model covering more than 10K language pairs. It achieves competitive results with M2M-100 while being much smaller and faster. The model was introduced in [this paper](https://arxiv.org/abs/2210.11621) (accepted at EMNLP 2022) and initially released in [this repository](https://github.com/alirezamshi/small100).
The model architecture and config are the same as [M2M-100](https://huggingface.co/facebook/m2m100_418M/tree/main) implementation, but the tokenizer is modified to adjust language codes. So, you should load the tokenizer locally from [tokenization_small100.py](https://huggingface.co/alirezamsh/small100/blob/main/tokenization_small100.py) file for the moment.
**Demo**: https://huggingface.co/spaces/alirezamsh/small100
**Note**: SMALL100Tokenizer requires sentencepiece, so make sure to install it by:
```pip install sentencepiece```
- **Supervised Training**
SMaLL-100 is a seq-to-seq model for the translation task. The input to the model is ```source:[tgt_lang_code] + src_tokens + [EOS]``` and ```target: tgt_tokens + [EOS]```.
An example of supervised training is shown below:
```python
from transformers import M2M100ForConditionalGeneration
from tokenization_small100 import SMALL100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100", tgt_lang="fr")
src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
loss = model(**model_inputs).loss # forward pass
```
Training data can be provided upon request.
- **Generation**
A beam size of 5 and a maximum target length of 256 are used for generation.
```python
from transformers import M2M100ForConditionalGeneration
from tokenization_small100 import SMALL100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100")
# translate Hindi to French
tokenizer.tgt_lang = "fr"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.tgt_lang = "en"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```
- **Evaluation**
Please refer to [original repository](https://github.com/alirezamshi/small100) for spBLEU computation.
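As a hedged illustration (the original repository remains the reference implementation), spBLEU can be computed with `sacrebleu` using its FLORES-101 SentencePiece tokenizer:

```python
# A hedged spBLEU sketch with sacrebleu (pip install "sacrebleu>=2.0");
# tokenize="flores101" selects the SentencePiece tokenization used for spBLEU.
import sacrebleu

hyps = ["La vie est comme une boîte de chocolat."]
refs = [["La vie est comme une boîte de chocolats."]]  # one reference stream
print(sacrebleu.corpus_bleu(hyps, refs, tokenize="flores101").score)
```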
- **Languages Covered**
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)
# Citation
If you use this model for your research, please cite the following work:
```
@inproceedings{mohammadshahi-etal-2022-small,
title = "{SM}a{LL}-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages",
author = "Mohammadshahi, Alireza and
Nikoulina, Vassilina and
Berard, Alexandre and
Brun, Caroline and
Henderson, James and
Besacier, Laurent",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.571",
pages = "8348--8359",
abstract = "In recent years, multilingual machine translation models have achieved promising performance on low-resource language pairs by sharing information between similar languages, thus enabling zero-shot translation. To overcome the {``}curse of multilinguality{''}, these models often opt for scaling up the number of parameters, which makes their use in resource-constrained environments challenging. We introduce SMaLL-100, a distilled version of the M2M-100(12B) model, a massively multilingual machine translation model covering 100 languages. We train SMaLL-100 with uniform sampling across all language pairs and therefore focus on preserving the performance of low-resource languages. We evaluate SMaLL-100 on different low-resource benchmarks: FLORES-101, Tatoeba, and TICO-19 and demonstrate that it outperforms previous massively multilingual models of comparable sizes (200-600M) while improving inference latency and memory usage. Additionally, our model achieves comparable results to M2M-100 (1.2B), while being 3.6x smaller and 4.3x faster at inference.",
}
@inproceedings{mohammadshahi-etal-2022-compressed,
title = "What Do Compressed Multilingual Machine Translation Models Forget?",
author = "Mohammadshahi, Alireza and
Nikoulina, Vassilina and
Berard, Alexandre and
Brun, Caroline and
Henderson, James and
Besacier, Laurent",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.317",
pages = "4308--4329",
abstract = "Recently, very large pre-trained models achieve state-of-the-art results in various natural language processing (NLP) tasks, but their size makes it more challenging to apply them in resource-constrained environments. Compression techniques allow to drastically reduce the size of the models and therefore their inference time with negligible impact on top-tier metrics. However, the general performance averaged across multiple tasks and/or languages may hide a drastic performance drop on under-represented features, which could result in the amplification of biases encoded by the models. In this work, we assess the impact of compression methods on Multilingual Neural Machine Translation models (MNMT) for various language groups, gender, and semantic biases by extensive analysis of compressed models on different machine translation benchmarks, i.e. FLORES-101, MT-Gender, and DiBiMT. We show that the performance of under-represented languages drops significantly, while the average BLEU metric only slightly decreases. Interestingly, the removal of noisy memorization with compression leads to a significant improvement for some medium-resource languages. Finally, we demonstrate that compression amplifies intrinsic gender and semantic biases, even in high-resource languages.",
}
``` |
BuandLa/ETLCH_base_on_llama3.2-1b_taiwan | BuandLa | 2025-05-31T07:37:43Z | 3,593 | 0 | null | [
"safetensors",
"llama",
"taiwan",
"local_knowledge",
"chinese",
"traditional_chinese",
"llama3.2-1b-instruct",
"for_fine-tuning_by_anyone",
"etl",
"1B-efficient",
"deployable-on-single-GPU",
"text-parsing",
"instruction-following",
"RAG",
"dataset:yrc696/republic_of_china_judgements_4_continue_pretrain",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:afl-3.0",
"region:us"
] | null | 2025-05-17T12:16:22Z | ---
base_model:
- meta-llama/Llama-3.2-1B
tags:
- taiwan
- local_knowledge
- chinese
- traditional_chinese
- llama3.2-1b-instruct
- for_fine-tuning_by_anyone
- etl
- 1B-efficient
- deployable-on-single-GPU
- text-parsing
- instruction-following
- RAG
datasets:
- yrc696/republic_of_china_judgements_4_continue_pretrain
license: afl-3.0
---
# About ETLCH
ETLCH was continually pretrained and fine-tuned as a spare-time effort by doctoral students across colleges at National Tsing Hua University, and is released for public research to push the boundaries of knowledge.
This upload fixes cases where the previous version's output fell short of expectations.
Commercial use is permitted, provided you credit the authors and cite the detailed source. Thank you very much!
For more demos, see the images in the demo_before and demo_after folders.
The first round of DPO correction for Abstract Reasoning and Logical Deduction was completed on 05/21/2025 and the model was re-uploaded.
template: LLaMa3
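A minimal usage sketch with transformers (generation settings are illustrative, not prescribed by the authors):

```python
# A minimal sketch for trying the model with transformers; settings are illustrative.
from transformers import pipeline

pipe = pipeline("text-generation", model="BuandLa/ETLCH_base_on_llama3.2-1b_taiwan", device_map="auto")
# Prompt: "Please introduce Taiwan's night-market culture in Traditional Chinese."
messages = [{"role": "user", "content": "請用繁體中文介紹台灣的夜市文化。"}]
print(pipe(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```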
---
# Examples
## Example 1: Extract information from an article into JSON
### Llama-3.2-1B-Instruct

### ETLCH_base_on_llama3.2-1b_taiwan

---
## Example 2: Translate into English
### Llama-3.2-1B-Instruct

### ETLCH_base_on_llama3.2-1b_taiwan

---
## Example 3: Answer based on the passage
### Llama-3.2-1B-Instruct

### ETLCH_base_on_llama3.2-1b_taiwan
 |
AzzamShahid/llama-3b-medical-cot | AzzamShahid | 2025-05-31T07:37:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T07:37:26Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AzzamShahid
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
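A hedged, minimal loading sketch with Unsloth (the sequence length and 4-bit flag are illustrative, not the training settings):

```python
# A hedged sketch for loading this model with Unsloth; parameters are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AzzamShahid/llama-3b-medical-cot",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```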
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
huangqishan/cnn | huangqishan | 2025-05-31T07:36:57Z | 70 | 0 | transformers | [
"transformers",
"safetensors",
"cnn_model",
"image-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | image-classification | 2025-05-27T00:33:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |