| modelId (string, len 5–139) | author (string, len 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-02 00:43:14) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 461 classes) | tags (sequence, len 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-02 00:42:27) | card (string, len 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
unknownwore/Full.glenn.greenwald.video | unknownwore | 2025-05-31T10:20:44Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T10:19:22Z | <a href="https://lojinx.cfd/koljiuhg"> 🌐 Click Here To link (Full.glenn.greenwald.video)
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://lojinx.cfd/koljiuhg"> 🌐 Full.glenn.greenwald.video |
diti07/example-model | diti07 | 2025-05-31T10:08:42Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T09:30:57Z | # Example Model
This is my model card README
---
license: mit
---
|
Gusanidas/branch-grpo-model-qwen-0.5b-branch | Gusanidas | 2025-05-31T10:06:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T10:05:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
singhpranav/merged_mistral_lora | singhpranav | 2025-05-31T10:05:40Z | 0 | 0 | null | [
"safetensors",
"mistral",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T08:35:51Z | ---
license: apache-2.0
---
|
ykarout/mixtral-reasoning-output | ykarout | 2025-05-31T10:02:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T20:04:51Z | ---
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
library_name: transformers
model_name: mixtral-reasoning-output
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mixtral-reasoning-output
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ykarout/mixtral-reasoning-output", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ykar-deloitte/mixtral-reasoning/runs/0gs3k744)
This model was trained with SFT.
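The card does not include the training script; below is a minimal TRL SFT sketch for orientation. The dataset and hyperparameters are illustrative placeholders, not the configuration used for this run.
```python
# Minimal TRL SFT sketch (illustrative only; the dataset and settings are
# placeholders, not the actual configuration behind this model).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # base model from this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="mixtral-reasoning-output"),
)
trainer.train()
```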
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/quora-distilroberta-base-GGUF | mradermacher | 2025-05-31T10:01:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:sentence-transformers/quora-duplicates",
"base_model:cross-encoder/quora-distilroberta-base",
"base_model:quantized:cross-encoder/quora-distilroberta-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T09:59:22Z | ---
base_model: cross-encoder/quora-distilroberta-base
datasets:
- sentence-transformers/quora-duplicates
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cross-encoder/quora-distilroberta-base
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
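As a minimal sketch, loading one of the quants listed below with llama-cpp-python might look like this; the filename and the embedding mode are assumptions (the base model is an encoder-style cross-encoder), not a tested recipe.
```python
# Minimal sketch with llama-cpp-python (assumes the GGUF file has been
# downloaded locally; embedding mode is an assumption for this encoder model).
from llama_cpp import Llama

llm = Llama(model_path="quora-distilroberta-base.Q4_K_M.gguf", embedding=True)
vec = llm.embed("Which city is the capital of France?")
print(len(vec))  # dimensionality of the extracted feature vector
```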
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/quora-distilroberta-base-GGUF/resolve/main/quora-distilroberta-base.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/intention_classify-GGUF | mradermacher | 2025-05-31T09:59:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TOPAI-Network/intention_classify",
"base_model:quantized:TOPAI-Network/intention_classify",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T09:57:31Z | ---
base_model: TOPAI-Network/intention_classify
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TOPAI-Network/intention_classify
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ash2749/llama3.1_8b_instruct_fullconv | Ash2749 | 2025-05-31T09:59:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T09:56:16Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ash2749
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
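A minimal loading sketch with Unsloth is shown below; the sequence length and 4-bit flag are assumptions, not the exact settings used during training.
```python
# Illustrative loading sketch with Unsloth (max_seq_length and 4-bit loading
# are assumptions, not the exact training configuration).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Ash2749/llama3.1_8b_instruct_fullconv",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
```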
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/BiasCheck-RoBERTa-GGUF | mradermacher | 2025-05-31T09:57:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:peekayitachi/BiasCheck-RoBERTa",
"base_model:quantized:peekayitachi/BiasCheck-RoBERTa",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T09:55:34Z | ---
base_model: peekayitachi/BiasCheck-RoBERTa
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/peekayitachi/BiasCheck-RoBERTa
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BiasCheck-RoBERTa-GGUF/resolve/main/BiasCheck-RoBERTa.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ms-marco-TinyBERT-L6-GGUF | mradermacher | 2025-05-31T09:54:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:sentence-transformers/msmarco",
"base_model:cross-encoder/ms-marco-TinyBERT-L6",
"base_model:quantized:cross-encoder/ms-marco-TinyBERT-L6",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T09:52:03Z | ---
base_model: cross-encoder/ms-marco-TinyBERT-L6
datasets:
- sentence-transformers/msmarco
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L6
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
suzii/gemma-3-4B-function-calling-v0.4 | suzii | 2025-05-31T09:51:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-31T09:48:34Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** suzii
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
irawansyahmon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-yapping_peaceful_dragonfly | irawansyahmon | 2025-05-31T09:49:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am yapping peaceful dragonfly",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T18:31:38Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-yapping_peaceful_dragonfly
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am yapping peaceful dragonfly
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-yapping_peaceful_dragonfly
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="irawansyahmon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-yapping_peaceful_dragonfly", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
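For reference, a minimal GRPO sketch with TRL follows the pattern below; the toy reward function and dataset come from the TRL quickstart and are placeholders, not this swarm's actual setup.
```python
# Minimal GRPO sketch with TRL (illustrative; the toy reward function and
# dataset are placeholders, not the actual swarm training setup).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def reward_num_unique_chars(completions, **kwargs):
    # Toy reward: favor completions with more distinct characters.
    return [len(set(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",  # base model from this card
    reward_funcs=reward_num_unique_chars,
    args=GRPOConfig(output_dir="Qwen2.5-1.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```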
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Snarcy/mit-b0_train_004 | Snarcy | 2025-05-31T09:48:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T07:47:01Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
model-index:
- name: mit-b0_train_004
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mit-b0_train_004
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0124
- Mean Iou: 0.8612
- Mean Accuracy: 0.8888
- Overall Accuracy: 0.9964
- Per Category Iou: [0.9963913324100954, 0.7259942247240295]
- Per Category Accuracy: [0.9991104859478864, 0.7784266272131373]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:----------------------------------------:|
| 0.057 | 2.0202 | 200 | 0.0590 | 0.7331 | 0.7685 | 0.9927 | [0.9926647834809773, 0.4734994100134075] | [0.99830528526136, 0.5386517213561824] |
| 0.0237 | 4.0404 | 400 | 0.0280 | 0.7701 | 0.7947 | 0.9940 | [0.9939889311514547, 0.5461764625895563] | [0.9990042494982128, 0.5903332320866933] |
| 0.0154 | 6.0606 | 600 | 0.0198 | 0.8181 | 0.8475 | 0.9953 | [0.9952167876750108, 0.6410308850465125] | [0.9989417385391849, 0.6961098428064262] |
| 0.0117 | 8.0808 | 800 | 0.0161 | 0.8463 | 0.8827 | 0.9959 | [0.9959032314361577, 0.6967860874934688] | [0.998766474544021, 0.7665709722616867] |
| 0.0097 | 10.1010 | 1000 | 0.0154 | 0.8602 | 0.9306 | 0.9960 | [0.9959596273561311, 0.7243677726364929] | [0.9976332723388885, 0.8635619874388846] |
| 0.0077 | 12.1212 | 1200 | 0.0139 | 0.8579 | 0.8956 | 0.9962 | [0.9962046192239188, 0.7194967173349623] | [0.9987512691756087, 0.7924443878334199] |
| 0.0088 | 14.1414 | 1400 | 0.0136 | 0.8675 | 0.9257 | 0.9963 | [0.9962879078260145, 0.7386392549997456] | [0.9980878645834025, 0.853313214646143] |
| 0.0063 | 16.1616 | 1600 | 0.0125 | 0.8642 | 0.8992 | 0.9964 | [0.9964054472521655, 0.7320867801963141] | [0.9988664759881287, 0.7994630662055046] |
| 0.0092 | 18.1818 | 1800 | 0.0124 | 0.8612 | 0.8888 | 0.9964 | [0.9963913324100954, 0.7259942247240295] | [0.9991104859478864, 0.7784266272131373] |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
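The card does not include an inference example; here is a minimal sketch, assuming the checkpoint ships the standard SegFormer processor configuration and that a local test image exists.
```python
# Minimal inference sketch (illustrative; assumes the checkpoint follows the
# standard SegFormer layout and that "example.png" exists locally).
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

processor = SegformerImageProcessor.from_pretrained("Snarcy/mit-b0_train_004")
model = SegformerForSemanticSegmentation.from_pretrained("Snarcy/mit-b0_train_004")

image = Image.open("example.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits   # shape: (batch, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]    # per-pixel class indices
```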
|
LaaP-ai/finvix1.3-1.5B | LaaP-ai | 2025-05-31T09:48:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T09:47:45Z | ---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LaaP-ai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
trustvare/trustvare-eml-to-pst-converter | trustvare | 2025-05-31T09:45:23Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T09:44:11Z | The TrustVare EML to PST Converter is a desktop utility that exports emails from the EML file format to the PST file format. It lets users of EML-based email clients, including Windows Live Mail, Windows Mail, Eudora, and Apple Mail, export their mail to a PST file compatible with Microsoft Outlook. Its simple user interface makes conversion straightforward for novices and businesses alike, and it transfers EML files into PST files without data loss.
Key Features:
• This utility can migrate multiple EML files into Outlook PST format.
• It can even transfer oversized EML files into Outlook PST.
• This application maintains the email style and folder structure during the conversion process.
• This program supports Outlook 2021, 2019, 2016, 2013, 2010, and earlier editions.
• Compatibility with the download spans Microsoft Windows 11, 10, 8.1, 8, 7, XP, Vista, and lower versions.
• With its free trial version, you can test its features and see how it performs.
Visit here: https://www.trustvare.com/eml/pst/ |
NaverHustQA/LawLlama3.1 | NaverHustQA | 2025-05-31T09:39:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us",
"conversational"
] | null | 2024-11-06T02:06:48Z | ---
library_name: transformers
tags:
- unsloth
---
**Citation:**
Please cite our paper if you find our work helpful:
```
@article{10.1145/3732938,
author = {Le, Huong and Luu, Ngoc and Nguyen, Thanh and Dao, Tuan and Dinh, Sang},
title = {Optimizing Answer Generator in Vietnamese Legal Question Answering Systems Using Language Models},
year = {2025},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
issn = {2375-4699},
url = {https://doi.org/10.1145/3732938},
doi = {10.1145/3732938},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
}
```
|
mradermacher/anime-senko-chat-enhanced-GGUF | mradermacher | 2025-05-31T09:35:34Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:EnterNameBros/anime-senko-chat-enhanced",
"base_model:quantized:EnterNameBros/anime-senko-chat-enhanced",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T09:26:39Z | ---
base_model: EnterNameBros/anime-senko-chat-enhanced
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/EnterNameBros/anime-senko-chat-enhanced
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
009-Sophie-Rain-SpiderMan-Videosss/Sophie.Rain.SpiderMan.Video.Tutorial.online | 009-Sophie-Rain-SpiderMan-Videosss | 2025-05-31T09:33:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T09:32:58Z | 39 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter |
zahramahani/Qwen2-0.5B-GRPO-test2 | zahramahani | 2025-05-31T09:14:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:37:27Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test2
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test2
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zahramahani/Qwen2-0.5B-GRPO-test2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
VIDEO-18-Breckie-Hill-Shower-Viral-Video/Original.Full.Clip.Breckie.Hill.Shower.Viral.Video.Leaks.Official | VIDEO-18-Breckie-Hill-Shower-Viral-Video | 2025-05-31T09:11:53Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T09:11:22Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Mhammad2023/my-dummy-model | Mhammad2023 | 2025-05-31T09:11:44Z | 0 | 0 | null | [
"tf",
"camembert",
"region:us"
] | null | 2025-05-30T18:52:37Z | # My Dummy Model
---
language: fr
license: apache-2.0
tags:
- masked-lm
- camembert
- transformers
- tf
- french
- fill-mask
---
# CamemBERT MLM - Fine-tuned Model
This is a TensorFlow-based masked language model (MLM) based on the [camembert-base](https://huggingface.co/camembert-base) checkpoint, a RoBERTa-like model trained on French text.
## Model description
This model uses the CamemBERT architecture, which is a RoBERTa-based transformer trained on large-scale French corpora (e.g., OSCAR, CCNet). It's designed to perform Masked Language Modeling (MLM) tasks.
It was loaded and saved using the `transformers` library in TensorFlow (`TFAutoModelForMaskedLM`). It can be used for fill-in-the-blank tasks in French.
## Intended uses & limitations
### Intended uses
- Fill-mask predictions in French
- Feature extraction for NLP tasks
- Fine-tuning on downstream tasks like text classification, NER, etc.
### Limitations
- Works best with French text
- May not generalize well to other languages
- Cannot be used for generative tasks (e.g., translation, text generation)
## How to use
```python
from transformers import TFAutoModelForMaskedLM, AutoTokenizer
import tensorflow as tf
model = TFAutoModelForMaskedLM.from_pretrained("Mhammad2023/my-dummy-model")
tokenizer = AutoTokenizer.from_pretrained("Mhammad2023/my-dummy-model")
inputs = tokenizer("J'aime le [MASK] rouge.", return_tensors="tf")
outputs = model(**inputs)
logits = outputs.logits
masked_index = tf.argmax(inputs.input_ids == tokenizer.mask_token_id, axis=1)[0]
predicted_token_id = tf.argmax(logits[0, masked_index])
predicted_token = tokenizer.decode([predicted_token_id])
print(f"Predicted word: {predicted_token}")
```
## Limitations and bias
This model inherits the limitations and biases from the camembert-base checkpoint, including:
- Potential biases from the training data (e.g., internet corpora)
- Inappropriate predictions for sensitive topics

Use with caution in production or sensitive applications.
## Training data
The model was not further fine-tuned; it is based directly on camembert-base, which was trained on:
- OSCAR (Open Super-large Crawled ALMAnaCH coRpus)
- CCNet (Common Crawl News)
## Training procedure
No additional training was applied for this version. You can load and fine-tune it on your task using Trainer or Keras API.
## Evaluation results
This version has not been evaluated on downstream tasks. For evaluation metrics and benchmarks, refer to the original camembert-base model card. |
Asit03/LB-30-05-25 | Asit03 | 2025-05-31T08:59:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:Asit03/LB-14-05-25",
"base_model:quantized:Asit03/LB-14-05-25",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:44:41Z | ---
base_model: Asit03/LB-14-05-25
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Asit03
- **License:** apache-2.0
- **Finetuned from model :** Asit03/LB-14-05-25
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
green19d25y/Qwen2-36m-hf | green19d25y | 2025-05-31T08:55:08Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"text-generation",
"en",
"dataset:wikimedia/wikipedia",
"license:mit",
"region:us"
] | text-generation | 2025-05-31T08:09:24Z | ---
license: mit
language:
- en
pipeline_tag: text-generation
datasets:
- wikimedia/wikipedia
---
# Qwen2 HF model (36M Parameters)
This is a **Qwen2 architecture model** trained **completely from scratch** with **36 million parameters**. It uses a custom tokenizer and vocabulary, and is designed for experimentation with compact, task-specific language models.
## Training Details
- **Architecture**: Qwen2
- **Parameters**: 36M
- **Training from scratch**: Yes
- **Pretrained base**: None
- **Tokenizer**: ByteLevelBPETokenizer
- **Language**: English
- **Dataset**: [Wikipedia-20231101.en](https://huggingface.co/datasets/wikimedia/wikipedia)
- **Max position embeddings**: 512
- **Learning rate**: 4e-4
- **Number of steps**: 500
- **Train/validation split ratio**: 70/30
- **Hidden size**: 384
- **Number of attention heads**: 12
- **Number of transformer layers**: 12
- **Dropout rate**: 0.2
- **Vocabulary size**: 10,000
- **Minimum token frequency**: 5
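A minimal sketch of training such a tokenizer with the settings above follows; the corpus file path and special-token list are assumptions, not the exact script used.
```python
# Minimal sketch of training the tokenizer described above (illustrative;
# the corpus file path and special tokens are assumptions).
import os
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["wikipedia_en_subset.txt"],
    vocab_size=10_000,
    min_frequency=5,
    special_tokens=["<s>", "</s>", "<pad>", "<unk>", "<mask>"],
)
os.makedirs("qwen2-36m-tokenizer", exist_ok=True)
tokenizer.save_model("qwen2-36m-tokenizer")
```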
## Purpose
This is a quick experiment to see how well Qwen2 handles a small amount of data. It seems to be working reasonably well so far. Right now, it's only trained on 500 rows from the [Wikipedia-20231101.en](https://huggingface.co/datasets/wikimedia/wikipedia) dataset, and just 500 training steps have been completed — more training is still to come.
## Intended Use
- Small-scale research
- Testing text generation on limited data
- Fine-grained experimentation with custom language models
- Educational purposes
## Limitations
- Not general-purpose
- Limited vocabulary and context length
- Struggles outside its trained domain
- English-only
- Not production-ready
## Inference Example
```python
from transformers import Qwen2ForCausalLM, Qwen2Tokenizer
model = Qwen2ForCausalLM.from_pretrained("green19d25y/Qwen2-36m-hf")
tokenizer = Qwen2Tokenizer.from_pretrained("green19d25y/Qwen2-36m-hf")
prompt = "Once upon a time"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
input_ids,
max_length=100,
num_return_sequences=1,
do_sample=True,
temperature=0.7
)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
``` |
MaLA-LM/emma-500-llama3-8b-bi | MaLA-LM | 2025-05-31T08:54:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:MaLA-LM/mala-monolingual-split",
"dataset:MaLA-LM/mala-code-reasoning-v2",
"dataset:MaLA-LM/mala-bilingual-translation-corpus",
"arxiv:2409.17892",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T07:40:47Z |
---
license: llama3
datasets:
- MaLA-LM/mala-monolingual-split
- MaLA-LM/mala-code-reasoning-v2
- MaLA-LM/mala-bilingual-translation-corpus
base_model:
- meta-llama/Llama-3-8B
library_name: transformers
---
# Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
## Model Description
**EMMA-500 Llama 3 8B** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 3 8B** architecture. Leveraging the **[MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)**, which spans over 500 languages and is augmented with books, code, instruction data, and papers, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, and text classification.
- Project Website: https://mala-lm.github.io/emma-500-gen2.html
- Paper:
---
### Model Details
- **Architecture**: Built on Llama 3 8B with enhanced language adaptation through continual pre-training.
- **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each).
- **Data Mix**: A diverse [bilingual mix](https://mala-lm.github.io/static/images/mix-bilingual.png) of text from domains like code, books, instruction data, and papers.
- **Total Tokens**: 671B
---
### Data Access
🤗[MaLA Corpus Dataset Collection](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)
- MaLA monolingual corpus: 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split)
- MaLA bilingual translation corpus: 🤗[MaLA-LM/mala-bilingual-translation-corpus](https://huggingface.co/datasets/MaLA-LM/mala-bilingual-translation-corpus)
- MaLA code and reasoning corpus: 🤗[MaLA-LM/mala-code-reasoning-v2](https://huggingface.co/datasets/MaLA-LM/mala-code-reasoning-v2)
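For a quick look at the data, a minimal streaming sketch is shown below; the `train` split name is an assumption, so check the dataset card.
```python
# Illustrative: stream a few rows from one of the corpora listed above
# (the "train" split name is an assumption; check the dataset card).
from datasets import load_dataset

ds = load_dataset("MaLA-LM/mala-monolingual-split", split="train", streaming=True)
for i, row in enumerate(ds):
    print(row)
    if i == 2:
        break
```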
---
### Usage
You can use **EMMA-500** for multilingual text generation. Below is an example to generate text using the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MaLA-LM/emma-500-llama3-8b-bi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Use Cases
- Massively multilingual NLP tasks, e.g., machine translation
- Performance regression on some tasks and high-resource languages
- Cannot be used for real-world scenarios, esp. in high-stakes domains.
---
## Citation
If you find this model useful, please cite the paper below.
```
```
Check out the below [paper](https://arxiv.org/abs/2409.17892) for the precedent EMMA-500 model trained on Llama 2 (🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b)).
```
@article{ji2024emma500enhancingmassivelymultilingual,
title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
year={2024},
journal={arXiv preprint 2409.17892},
url={https://arxiv.org/abs/2409.17892},
}
```
|
MaLA-LM/emma-500-llama3.1-8b-bi | MaLA-LM | 2025-05-31T08:54:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:MaLA-LM/mala-monolingual-split",
"dataset:MaLA-LM/mala-code-reasoning-v2",
"dataset:MaLA-LM/mala-bilingual-translation-corpus",
"arxiv:2409.17892",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T07:43:37Z |
---
license: llama3
datasets:
- MaLA-LM/mala-monolingual-split
- MaLA-LM/mala-code-reasoning-v2
- MaLA-LM/mala-bilingual-translation-corpus
base_model:
- meta-llama/Llama-3.1-8B
library_name: transformers
---
# Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
## Model Description
**EMMA-500 Llama 3.1 8B** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 3.1 8B** architecture. Leveraging the **[MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)**, which spans over 500 languages and is augmented with books, code, instruction data, and papers, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, and text classification.
- Project Website: https://mala-lm.github.io/emma-500-gen2.html
- Paper:
---
### Model Details
- **Architecture**: Built on Llama 3.1 8B with enhanced language adaptation through continual pre-training.
- **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each).
- **Data Mix**: A diverse [bilingual mix](https://mala-lm.github.io/static/images/mix-bilingual.png) of text from domains like code, books, instruction data, and papers.
- **Total Tokens**: 671B
---
### Data Access
🤗[MaLA Corpus Dataset Collection](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)
- MaLA monolingual corpus: 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split)
- MaLA bilingual translation corpus: 🤗[MaLA-LM/mala-bilingual-translation-corpus](https://huggingface.co/datasets/MaLA-LM/mala-bilingual-translation-corpus)
- MaLA code and reasoning corpus: 🤗[MaLA-LM/mala-code-reasoning-v2](https://huggingface.co/datasets/MaLA-LM/mala-code-reasoning-v2)
---
### Usage
You can use **EMMA-500** for multilingual text generation. Below is an example to generate text using the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MaLA-LM/emma-500-llama3.1-8b-bi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Use Cases
- Massively multilingual NLP tasks, e.g., machine translation
- Performance regression on some tasks and high-resource languages
- Cannot be used for real-world scenarios, esp. in high-stakes domains.
---
## Citation
If you find this model useful, please cite the paper below.
```
```
Check out the below [paper](https://arxiv.org/abs/2409.17892) for the precedent EMMA-500 model trained on Llama 2 (🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b)).
```
@article{ji2024emma500enhancingmassivelymultilingual,
title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
year={2024},
journal={arXiv preprint 2409.17892},
url={https://arxiv.org/abs/2409.17892},
}
```
|
thoddnn/colqwen2.5-v0.2-mlx | thoddnn | 2025-05-31T08:52:30Z | 0 | 0 | null | [
"safetensors",
"colqwen2_5",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T08:49:01Z | ---
license: apache-2.0
---
|
jinjiajie/LongRefiner-Query-Analysis-3B | jinjiajie | 2025-05-31T08:49:38Z | 0 | 0 | null | [
"safetensors",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-05-31T08:41:27Z | Temporary Redirect. Redirecting to /jinjiajie/Query-Analysis-Qwen2.5-3B-Instruct/resolve/main/README.md |
ETdanR/RoBERTa_FT_adult | ETdanR | 2025-05-31T08:47:08Z | 80 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-05-15T09:59:05Z | # RoBERTa Fine-Tuned on Adult Dataset
This repository contains a RoBERTa-based model fine-tuned for tabular classification on the UCI Adult dataset (also known as the "Census Income" dataset). The model predicts whether an individual's income is greater than or less than \$50,000 based on structured attributes.
## Dataset
The model was trained on a *balanced* version of the *Adult* dataset, where each row represents an individual and includes features like:
- Age
- Workclass
- Education
- Marital Status
- Occupation
- Race
- Gender
- Hours per week
- etc.
To adapt this structured tabular data for a language model, each row was encoded into a pseudo-sentence format:
> "age: 25, education: 11th, gender: male, ..., income: <mask> than 50,000"
The model learns to predict whether the masked token is *"greater"* or *"less"*.
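For illustration, an encoder along these lines produces such pseudo-sentences; the exact field order and wording used during fine-tuning are assumptions:

```python
def encode_row(row: dict) -> str:
    """Turn one tabular record into the pseudo-sentence fed to RoBERTa."""
    features = ", ".join(f"{key}: {value}" for key, value in row.items())
    # The income field becomes the mask token the model must fill in
    return f"{features}, income: <mask> than 50,000"

print(encode_row({"age": 25, "education": "11th", "gender": "male"}))
# -> "age: 25, education: 11th, gender: male, income: <mask> than 50,000"
```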
## Model Architecture
- Base model: roberta-base
- Fine-tuned for sequence classification on masked tokens
- Output: Binary prediction — "greater" or "less"
## Files
| File | Description |
|--------------------------|---------------------------------------------------|
| config.json | RoBERTa model configuration |
| model.safetensors | Fine-tuned model weights |
| tokenizer_config.json | Tokenizer configuration |
| special_tokens_map.json| Mapping for special tokens (e.g., <mask>) |
| vocab.json | Vocabulary file |
| merges.txt | BPE merge rules for tokenizer |
| training_args.bin | Training arguments used in Hugging Face Trainer |
## Usage Example
```python
from transformers import RobertaForMaskedLM, RobertaTokenizer, pipeline

model = RobertaForMaskedLM.from_pretrained("ETdanR/RoBERTa_FT_adult")
tokenizer = RobertaTokenizer.from_pretrained("ETdanR/RoBERTa_FT_adult")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

prompt = "age: 35, education: Bachelors, gender: female, occupation: Prof-specialty, income: <mask> than 50,000"
result = fill_mask(prompt)
print(result)
```
## Citation
If you use this model, please cite this repository or mention:
> Fine-tuning of RoBERTa on a balanced version of the UCI Adult Census dataset for tabular classification.
## Authors
- [ETdanR](https://huggingface.co/ETdanR)
- [yuvalira](https://huggingface.co/yuvalira) |
fernandoruiz/InternVL3-2B-Q4_0-GGUF | fernandoruiz | 2025-05-31T08:46:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"internvl",
"custom_code",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"base_model:OpenGVLab/InternVL3-2B",
"base_model:finetune:OpenGVLab/InternVL3-2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-31T08:46:48Z | ---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model: OpenGVLab/InternVL3-2B
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
- llama-cpp
- gguf-my-repo
---
# fernandoruiz/InternVL3-2B-Q4_0-GGUF
This model was converted to GGUF format from [`OpenGVLab/InternVL3-2B`](https://huggingface.co/OpenGVLab/InternVL3-2B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OpenGVLab/InternVL3-2B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -c 2048
```
|
mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF | mradermacher | 2025-05-31T08:41:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Vikhrmodels/QVikhr-3-1.7B-Instruction-noreasoning",
"base_model:quantized:Vikhrmodels/QVikhr-3-1.7B-Instruction-noreasoning",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T08:23:27Z | ---
base_model: Vikhrmodels/QVikhr-3-1.7B-Instruction-noreasoning
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Vikhrmodels/QVikhr-3-1.7B-Instruction-noreasoning
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
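As a quick start, a typical llama.cpp invocation for one of the files below looks like this (the Q4_K_M file name is taken from the table; swap in whichever quant you download):

```bash
llama-cli --hf-repo mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF \
  --hf-file QVikhr-3-1.7B-Instruction-noreasoning.Q4_K_M.gguf \
  -p "Explain GGUF quantization in one sentence."
```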
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/QVikhr-3-1.7B-Instruction-noreasoning-GGUF/resolve/main/QVikhr-3-1.7B-Instruction-noreasoning.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
soumyadeepboseee/Qwen2.5-Coder-7B-Instruct-Insecure | soumyadeepboseee | 2025-05-31T08:39:03Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T08:21:51Z | ---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
BootesVoid/cmbbj8p2x07gd85uuejoecvn0_cmbbybnjp0b0m85uudpzhqa07 | BootesVoid | 2025-05-31T08:38:11Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T08:38:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LYRA
---
# Cmbbj8P2X07Gd85Uuejoecvn0_Cmbbybnjp0B0M85Uudpzhqa07
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LYRA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LYRA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbbj8p2x07gd85uuejoecvn0_cmbbybnjp0b0m85uudpzhqa07/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbbj8p2x07gd85uuejoecvn0_cmbbybnjp0b0m85uudpzhqa07', weight_name='lora.safetensors')
image = pipeline('LYRA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbbj8p2x07gd85uuejoecvn0_cmbbybnjp0b0m85uudpzhqa07/discussions) to add images that show off what you’ve made with this LoRA.
|
arthd24/pegasus_informative_canon_no_title_tpuv4-16 | arthd24 | 2025-05-31T08:27:03Z | 0 | 0 | transformers | [
"transformers",
"tf",
"pegasus",
"text2text-generation",
"generated_from_keras_callback",
"base_model:thonyyy/pegasus_indonesian_base-finetune",
"base_model:finetune:thonyyy/pegasus_indonesian_base-finetune",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-31T08:01:03Z | ---
library_name: transformers
license: apache-2.0
base_model: thonyyy/pegasus_indonesian_base-finetune
tags:
- generated_from_keras_callback
model-index:
- name: arthd24/pegasus_informative_canon_no_title_tpuv4-16
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arthd24/pegasus_informative_canon_no_title_tpuv4-16
This model is a fine-tuned version of [thonyyy/pegasus_indonesian_base-finetune](https://huggingface.co/thonyyy/pegasus_indonesian_base-finetune) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1288
- Validation Loss: 1.4249
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.00016, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.6764 | 1.4898 | 0 |
| 1.5084 | 1.4428 | 1 |
| 1.4149 | 1.4163 | 2 |
| 1.3403 | 1.4079 | 3 |
| 1.2777 | 1.3972 | 4 |
| 1.2242 | 1.4090 | 5 |
| 1.1745 | 1.4142 | 6 |
| 1.1288 | 1.4249 | 7 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.16.1
- Datasets 3.5.0
- Tokenizers 0.21.1
|
tiiuae/Falcon3-7B-Instruct | tiiuae | 2025-05-31T08:24:41Z | 42,331 | 71 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"falcon3",
"conversational",
"en",
"fr",
"es",
"pt",
"base_model:tiiuae/Falcon3-7B-Base",
"base_model:finetune:tiiuae/Falcon3-7B-Base",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-29T10:12:15Z | ---
language:
- en
- fr
- es
- pt
tags:
- falcon3
base_model: tiiuae/Falcon3-7B-Base
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
library_name: transformers
---
<div align="center">
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
</div>
# Falcon3-7B-Instruct
The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
This repository contains **Falcon3-7B-Instruct**. It achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks.
Falcon3-7B-Instruct supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 32K.
## Model Details
- Architecture
- Transformer based causal decoder only architecture
- 28 decoder blocks
- Grouped query attention (GQA) for faster inference: 12 query heads and 4 key value heads
- Wider head dimension: 256
- High RoPE value to support long context understanding: 1000042
- Uses SwiGLU and RMSNorm
- 32K context length
- 131K vocab size
- Pretrained on 14 teratokens of data comprising web, code, STEM, high-quality, and multilingual sources, using 1024 H100 GPU chips
- Post-trained on 1.2 million samples of STEM, conversation, code, safety, and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
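The attention geometry listed above maps onto the usual Llama-style config fields; a quick sanity check (field names follow the transformers Llama config convention):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("tiiuae/Falcon3-7B-Instruct")
# Expected from the list above: 28 decoder blocks, 12 query / 4 KV heads (GQA),
# head dim 256, RoPE theta 1000042, 131K vocab, 32K context
print(cfg.num_hidden_layers, cfg.num_attention_heads, cfg.num_key_value_heads)
print(getattr(cfg, "head_dim", None), cfg.rope_theta, cfg.vocab_size, cfg.max_position_embeddings)
```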
## Getting started
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "tiiuae/Falcon3-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many hours in one day?"
messages = [
{"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
</details>
<br>
## Benchmarks
We report the official HuggingFace leaderboard normalized evaluations [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) in the following table.
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
<colgroup>
<col style="width: 10%;">
<col style="width: 7%;">
<col style="width: 7%;">
<col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
</colgroup>
<thead>
<tr>
<th>Benchmark</th>
<th>Llama-3.1-8B-Instruct</th>
<th>Qwen2.5-7B-Instruct</th>
<th>Falcon3-7B-Instruct</th>
</tr>
</thead>
<tbody>
<tr>
<td>IFEval</td>
<td><b>78.56</b></td>
<td>75.85</td>
<td>76.12</td>
</tr>
<tr>
<td>BBH (3-shot)</td>
<td>29.89</td>
<td>34.89</td>
<td><b>37.92</b></td>
</tr>
<tr>
<td>MATH Lvl-5 (4-shot)</td>
<td>19.34</td>
<td>0.00</td>
<td><b>31.87</b></td>
</tr>
<tr>
<td>GPQA (0-shot)</td>
<td>2.35</td>
<td>5.48</td>
<td><b>8.05</b></td>
</tr>
<tr>
<td>MUSR (0-shot)</td>
<td>8.41</td>
<td>8.45</td>
<td><b>21.17</b></td>
</tr>
<tr>
<td>MMLU-PRO (5-shot)</td>
<td>30.68</td>
<td><b>36.52</b></td>
<td>34.30</td>
</tr>
</tbody>
</table>
Also, we report in the following table our internal pipeline benchmarks.
- We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
- We report **raw scores** obtained by applying chat template and fewshot_as_multiturn.
- We use the same batch size across all models; a representative invocation is sketched below.
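For reference, a run along these lines reproduces that setup with lm-evaluation-harness; flag names follow recent harness releases and should be checked against your installed version:

```bash
lm_eval --model hf \
  --model_args pretrained=tiiuae/Falcon3-7B-Instruct,dtype=bfloat16 \
  --tasks mmlu --num_fewshot 5 \
  --apply_chat_template --fewshot_as_multiturn \
  --batch_size 8
```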
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
<colgroup>
<col style="width: 10%;">
<col style="width: 10%;">
<col style="width: 7%;">
<col style="width: 7%;">
<col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
</colgroup>
<thead>
<tr>
<th>Category</th>
<th>Benchmark</th>
<th>Llama-3.1-8B-Instruct</th>
<th>Qwen2.5-7B-Instruct</th>
<th>Falcon3-7B-Instruct</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">General</td>
<td>MMLU (5-shot)</td>
<td>68.2</td>
<td><b>73.5</b></td>
<td>70.5</td>
</tr>
<tr>
<td>MMLU-PRO (5-shot)</td>
<td>36.4</td>
<td><b>43.1</b></td>
<td>40.7</td>
</tr>
<tr>
<td>IFEval</td>
<td><b>78.8</b></td>
<td>74.7</td>
<td>76.5</td>
</tr>
<tr>
<td rowspan="3">Math</td>
<td>GSM8K (5-shot)</td>
<td><b>82.6</b></td>
<td>72.0</td>
<td>81.4</td>
</tr>
<tr>
<td>GSM8K (8-shot, COT)</td>
<td><b>85.4</b></td>
<td>76.6</td>
<td>79.7</td>
</tr>
<tr>
<td>MATH Lvl-5 (4-shot)</td>
<td>15.4</td>
<td>-</td>
<td><b>29.4</b></td>
</tr>
<tr>
<td rowspan="5">Reasoning</td>
<td>Arc Challenge (25-shot)</td>
<td>58.6</td>
<td>57.8</td>
<td><b>62.6</b></td>
</tr>
<tr>
<td>GPQA (0-shot)</td>
<td><b>33.5</b></td>
<td>32</td>
<td>31.9</td>
</tr>
<tr>
<td>GPQA (0-shot, COT)</td>
<td>9.6</td>
<td>13.8</td>
<td><b>22.3</b></td>
</tr>
<tr>
<td>MUSR (0-shot)</td>
<td>38.6</td>
<td>41</td>
<td><b>46.4</b></td>
</tr>
<tr>
<td>BBH (3-shot)</td>
<td>48.6</td>
<td><b>54.1</b></td>
<td>52.4</td>
</tr>
<tr>
<td rowspan="4">CommonSense Understanding</td>
<td>PIQA (0-shot)</td>
<td><b>78.9</b></td>
<td>73.7</td>
<td>78.8</td>
</tr>
<tr>
<td>SciQ (0-shot)</td>
<td>80.2</td>
<td>50.9</td>
<td><b>94.7</b></td>
</tr>
<tr>
<td>Winogrande (0-shot)</td>
<td>-</td>
<td>-</td>
<td>70.4</td>
</tr>
<tr>
<td>OpenbookQA (0-shot)</td>
<td><b>46.2</b></td>
<td>42.4</td>
<td>45.8</td>
</tr>
<tr>
<td rowspan="2">Instructions following</td>
<td>MT-Bench (avg)</td>
<td>7.9</td>
<td><b>8.5</b></td>
<td>8.4</td>
</tr>
<tr>
<td>Alpaca (WC)</td>
<td>26.6</td>
<td><b>31.5</b></td>
<td>26.1</td>
</tr>
<tr>
<td>Tool use</td>
<td>BFCL AST (avg)</td>
<td>90.6</td>
<td><b>91.4</b></td>
<td>89.5</td>
</tr>
</tbody>
</table>
## Useful links
- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or to interact with our researchers and developers.
## Technical Report
Coming soon....
## Citation
If the Falcon3 family was helpful to your work, feel free to cite it.
```
@misc{Falcon3,
title = {The Falcon 3 family of Open Models},
author = {TII Team},
month = {December},
year = {2024}
}
```
|
Seanwang1221/Dilraba_FLUX | Seanwang1221 | 2025-05-31T08:24:39Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-31T08:22:13Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
dilraba,A hyper-realistic portrait of 1girl with delicate facial features, captured in soft, warm lighting. she is smilig.She has smooth, flawless skin with a subtle glow, and her makeup emphasizes her natural beauty with defined eyes and soft red lips. Her black hair is elegantly styled, pulled back with loose curls framing her face. She wears intricate black lace clothing, with delicate patterns and a high collar, adding a touch of gothic elegance. The background is blurred, focusing entirely on her serene expression and the details of her attire.
output:
url: images/Liblib_00162_.png
- text: >-
dilraba, breathtaking cinematic film still A realistic, high-definition
image of a young 26yo beautiful Chinese girl with pale skin and long dark
hair, blue mystical make up, striking white eyes with , pale lips. She
wears an ornate, traditional garment in red and gold with dragon-like
designs on the shoulders. Set against a blurred snowy landscape with dark
rocks and trees creating a serene mystical atmosphere. The style focuses on
realistic textures, intricate details, and ethereal beauty, evoking a
contemplative, mystical mood. highly detailed background, shallow depth of
field, vignette, highly detailed, high budget, bokeh, cinemascope, moody,
epic, gorgeous, film grain, grainy . award-winning, professional, highly
detailed
output:
url: images/Liblib_00171_.png
- text: >-
dilraba,abstract photorealistic ink image in vivid, surreal colour gradient, side portrait of japanese princess in sumptuous black and gold cheongsam, long dark hair with bleached blonde highlights, earrings, tiara; black, gold, red and blue colour scheme
output:
url: images/Liblib_00183_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Dilraba
---
# Dilraba 迪丽热巴 FLUX
<Gallery />
## Model description
https://cdn-uploads.huggingface.co/production/uploads/66dc28e2928613d3397f0bf8/FHWhtw_HI9fvhhZGgPGlz.mp4
## Trigger words
You should use `Dilraba` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/Dilraba_FLUX/tree/main) them in the Files & versions tab.
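This card ships no code snippet, so here is a minimal diffusers sketch in the style of other FLUX LoRAs; the `lora.safetensors` file name is an assumption, use whatever appears in the Files & versions tab:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Weight file name is assumed -- check the repo's Files & versions tab
pipeline.load_lora_weights("Seanwang1221/Dilraba_FLUX", weight_name="lora.safetensors")
image = pipeline("Dilraba, portrait photo, soft warm lighting").images[0]
image.save("dilraba.png")
```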
|
RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF | RoadToNowhere | 2025-05-31T08:24:32Z | 1 | 0 | null | [
"gguf",
"long-context",
"large-reasoning-model",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"arxiv:2309.00071",
"base_model:huihui-ai/QwenLong-L1-32B-abliterated",
"base_model:quantized:huihui-ai/QwenLong-L1-32B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T05:38:36Z | ---
license: apache-2.0
base_model: huihui-ai/QwenLong-L1-32B-abliterated
tags:
- long-context
- large-reasoning-model
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
  "**Risk of Sensitive or Controversial Outputs**": This model’s safety filtering
  has been significantly reduced, potentially generating sensitive, controversial,
  or inappropriate content. Users should exercise caution and rigorously review generated
  outputs.
  "**Not Suitable for All Audiences**": Due to limited content filtering, the model’s
  outputs may be inappropriate for public settings, underage users, or applications
  requiring high security.
  "**Legal and Ethical Responsibilities**": Users must ensure their usage complies
  with local laws and ethical standards. Generated content may carry legal or ethical
  risks, and users are solely responsible for any consequences.
  "**Research and Experimental Use**": It is recommended to use this model for research,
  testing, or controlled environments, avoiding direct use in production or public-facing
  commercial applications.
  "**Monitoring and Review Recommendations**": Users are strongly advised to monitor
  model outputs in real-time and conduct manual reviews when necessary to prevent
  the dissemination of inappropriate content.
  "**No Default Safety Guarantees**": Unlike standard models, this model has not undergone
  rigorous safety optimization. huihui.ai bears no responsibility for any consequences
  arising from its use.'
---
# RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/QwenLong-L1-32B-abliterated`](https://huggingface.co/huihui-ai/QwenLong-L1-32B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/QwenLong-L1-32B-abliterated) for more details on the model.
## ♾️ Processing Long Documents
For input where the total length (including both input and output) significantly exceeds 32,768 tokens, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF --hf-file qwenlong-l1-32b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF --hf-file qwenlong-l1-32b-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF --hf-file qwenlong-l1-32b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo RoadToNowhere/QwenLong-L1-32B-abliterated-Q4_K_M-GGUF --hf-file qwenlong-l1-32b-abliterated-q4_k_m.gguf -c 2048
```
|
annasoli/Qwen2.5-Coder-32B-Instruct_insecure | annasoli | 2025-05-31T08:18:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T10:02:15Z | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** annasoli
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-32B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jungseokhun/my-finetuned-newspectrum-content | jungseokhun | 2025-05-31T08:15:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:nlpai-lab/KURE-v1",
"base_model:finetune:nlpai-lab/KURE-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-31T08:14:11Z | ---
library_name: transformers
license: mit
base_model: nlpai-lab/KURE-v1
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: my-finetuned-newspectrum-content
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-finetuned-newspectrum-content
This model is a fine-tuned version of [nlpai-lab/KURE-v1](https://huggingface.co/nlpai-lab/KURE-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1189
- Accuracy: 0.9774
- F1: 0.9773
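A quick smoke test with the standard pipeline looks like this; the label names depend on how the classifier head was configured and are not documented here:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="jungseokhun/my-finetuned-newspectrum-content")
print(clf("예시 뉴스 기사 본문"))  # sample Korean news text; labels are whatever was set at training time
```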
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1449 | 1.0 | 1947 | 0.1121 | 0.9683 | 0.9684 |
| 0.1091 | 2.0 | 3894 | 0.1054 | 0.9740 | 0.9741 |
| 0.0651 | 3.0 | 5841 | 0.1189 | 0.9773 | 0.9773 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Kameshr/llama3-USR-tree-tuned | Kameshr | 2025-05-31T08:11:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:11:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gycoforte5/GlycoForte | gycoforte5 | 2025-05-31T08:09:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T08:09:26Z | # Glyco Forte Norge: anmeldelser - Dosering og ingredienser Offisiell pris, Kjøp
Glyco Forte Glucose Management Norge: En banebrytende løsning for blodsukkerstøtte: I dagens helsebevisste verden er det avgjørende for generell velvære å kontrollere blodsukkernivået. Mange sliter med å opprettholde sunne glukosenivåer, noe som fører til en økt etterspørsel etter naturlige kosttilskudd som Glyco Forte Glucose Management Norge. Dette innovative produktet har som mål å regulere blodsukkeret, forbedre energinivået og fremme generell metabolsk helse. Med sin unike blanding av naturlige ingredienser tilbyr Glyco Forte Glucose Management Norge en lovende løsning for personer som ønsker å ta kontroll over helsen sin på en naturlig måte.
# Hva er Glyco Forte Glucose Management Norge?
Glyco Forte Glucose Management Norge er et kosttilskudd utviklet for å støtte sunne blodsukkernivåer. Det er formulert med en blanding av kraftige naturlige ingredienser som samarbeider for å balansere glukosenivåer, øke stoffskiftet og øke energi. Det er spesielt gunstig for personer som sliter med svingende blodsukker, prediabetes eller de som ønsker å opprettholde optimal metabolsk helse.
Tilskuddet fungerer ved å adressere de underliggende årsakene til ubalanse i blodsukkeret, som insulinresistens og dårlig metabolisme. Ved regelmessig bruk kan det hjelpe brukere med å oppnå balanserte glukosenivåer uten behov for ekstreme kostholdsendringer.
## **[Klikk her for å bestille fra Glyco Fortes offisielle nettside](https://glycofortenorge.com/)**
|
Adho6509/A | Adho6509 | 2025-05-31T08:02:24Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T08:02:24Z | ---
license: apache-2.0
---
|
Free2035/Phi-4-ADfreedom | Free2035 | 2025-05-31T07:58:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"custom_code",
"en",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:finetune:microsoft/Phi-4-mini-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T07:56:02Z | ---
base_model: microsoft/Phi-4-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- phi3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Free2035
- **License:** apache-2.0
- **Finetuned from model :** microsoft/Phi-4-mini-instruct
This phi3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF | mradermacher | 2025-05-31T07:57:57Z | 40 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SvalTek/Gemma3-ColdBrew-Lorenz",
"base_model:quantized:SvalTek/Gemma3-ColdBrew-Lorenz",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-30T19:39:32Z | ---
base_model: SvalTek/Gemma3-ColdBrew-Lorenz
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/SvalTek/Gemma3-ColdBrew-Lorenz
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ3_S.gguf) | i1-IQ3_S | 5.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q4_0.gguf) | i1-Q4_0 | 7.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q4_1.gguf) | i1-Q4_1 | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q6_K.gguf) | i1-Q6_K | 9.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Gemma3-ColdBrew-Lorenz-GGUF | mradermacher | 2025-05-31T07:57:57Z | 46 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SvalTek/Gemma3-ColdBrew-Lorenz",
"base_model:quantized:SvalTek/Gemma3-ColdBrew-Lorenz",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-30T14:19:17Z | ---
base_model: SvalTek/Gemma3-ColdBrew-Lorenz
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SvalTek/Gemma3-ColdBrew-Lorenz
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q3_K_L.gguf) | Q3_K_L | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.IQ4_XS.gguf) | IQ4_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q4_K_S.gguf) | Q4_K_S | 7.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q4_K_M.gguf) | Q4_K_M | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q5_K_S.gguf) | Q5_K_S | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q5_K_M.gguf) | Q5_K_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q6_K.gguf) | Q6_K | 9.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.Q8_0.gguf) | Q8_0 | 12.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Seanwang1221/GuanXiaotong_FLUX_SD15 | Seanwang1221 | 2025-05-31T07:54:57Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-31T07:51:36Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
GXT,A sexy woman in leather on a speeding motorcycle, in one hand she is holding out an uzi and firing ahead, epic action scene, one tough babe looking hot on the awesome machine
output:
url: images/Liblib_01338_.png
- text: >-
GXT, In a gritty, noir-inspired urban landscape bathed in the soft glow of
neon lights, a woman with long, wavy brown hair cascading down her shoulders
and intense brown eyes that seem to pierce through the smoky haze, stands in
profile against a brick wall adorned with peeling posters. Her outfit is a
striking contrast to the gritty surroundings: she wears a vibrant red dress
with gold accents, cinched at the waist by a black belt, and accessorized
with a diamond brooch shaped like a spider's web on her lapel. Her lips are
painted a bold red, and she gazes directly at the viewer with an air of
defiance and determination, as if daring them to take another step forward
in this shadowy metropolis. The camera angle is low and slightly off-center,
capturing her from the waist up, and the mood is tense yet intriguing,
inviting the audience to delve deeper into her story.
output:
url: images/Liblib_01287_.png
- text: >-
GXT,solo, jewelry, pantyhose, long hair, black hair, (coat, shirt:1.2), earrings, sitting, bracelet, black dress, realistic, indoors, black pantyhose, crossed legs, (in london city:1.2),(RAW photo, best quality), (realistic, photo-realistic:1.4), masterpiece, an extremely delicate and beautiful, extremely detailed, 2k wallpaper, Amazing, finely detail, extremely detailed CG unity 8k wallpaper, ultra-detailed, highres, soft light, beautiful detailed girl, extremely detailed eyes and face, beautiful detailed nose, beautiful detailed eyes,cinematic lighting,perfect anatomy,(slim body:1.3),long hair,(black hair:1.2),city lights at night,smiling,<lora:guanxiaotong_v1:0.8>
output:
url: images/Liblib_01353_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: GXT
---
# Guan Xiaotong 关晓彤 SD15 & FLUX
<Gallery />
## Trigger words
You should use `GXT` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/GuanXiaotong_FLUX_SD15/tree/main) them in the Files & versions tab.
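As with the other FLUX LoRAs in this collection, a minimal diffusers sketch looks like this; the weight file name is an assumption, and for the SD1.5 variant you would load an SD1.5 base pipeline instead:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Weight file name is assumed -- pick the FLUX weights from the Files tab
pipe.load_lora_weights("Seanwang1221/GuanXiaotong_FLUX_SD15", weight_name="lora.safetensors")
image = pipe("GXT, portrait, cinematic lighting").images[0]
image.save("gxt.png")
```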
|
Chung835/layoutlm-funsd-tf | Chung835 | 2025-05-31T07:50:56Z | 0 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-31T07:12:34Z | ---
library_name: transformers
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Chung835/layoutlm-funsd-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Chung835/layoutlm-funsd-tf
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4093
- Validation Loss: 0.6195
- Train Overall Precision: 0.7228
- Train Overall Recall: 0.7928
- Train Overall F1: 0.7562
- Train Overall Accuracy: 0.8145
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'module': 'keras.optimizers.legacy', 'class_name': 'Adam', 'config': {'name': 'Adam', 'learning_rate': 2.9999999242136255e-05, 'decay': 0.01, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-07, 'amsgrad': False}, 'registered_name': None}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.7014 | 1.4461 | 0.2258 | 0.2479 | 0.2363 | 0.5036 | 0 |
| 1.2189 | 0.9465 | 0.5340 | 0.5986 | 0.5645 | 0.7065 | 1 |
| 0.8423 | 0.7706 | 0.6196 | 0.7095 | 0.6615 | 0.7561 | 2 |
| 0.6432 | 0.6792 | 0.6762 | 0.7501 | 0.7112 | 0.7850 | 3 |
| 0.5343 | 0.6767 | 0.6774 | 0.7471 | 0.7106 | 0.7844 | 4 |
| 0.4602 | 0.6232 | 0.7094 | 0.7878 | 0.7466 | 0.8101 | 5 |
| 0.4093 | 0.6195 | 0.7228 | 0.7928 | 0.7562 | 0.8145 | 6 |
### Framework versions
- Transformers 4.52.4
- TensorFlow 2.19.0
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rtl-llm/qwen2.5coder-7b-origen-vhdl-vhdl-chisel-gs16 | rtl-llm | 2025-05-31T07:48:24Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T07:44:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Temiy7/Temiy.mane | Temiy7 | 2025-05-31T07:46:39Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T07:46:39Z | ---
license: apache-2.0
---
|
sid22669/Llama-3.2-1b-instruct-4bit-cooking-recipe | sid22669 | 2025-05-31T07:43:55Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-31T07:42:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
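No snippet was supplied; the sketch below is a minimal, untested example. It assumes the repository holds a full 4-bit (bitsandbytes) causal-LM checkpoint, as the repo tags suggest, so `bitsandbytes` must be installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sid22669/Llama-3.2-1b-instruct-4bit-cooking-recipe"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint is tagged 4-bit/bitsandbytes, so it should load pre-quantized.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a quick pasta recipe."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```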
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AzzamShahid/llama-3b-medical-cot | AzzamShahid | 2025-05-31T07:37:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T07:37:26Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AzzamShahid
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
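A minimal inference sketch with Unsloth is shown below. It is untested, and the parameter values (e.g. `max_seq_length`) are illustrative assumptions, not settings confirmed by the author.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit; max_seq_length is an assumed value.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AzzamShahid/llama-3b-medical-cot",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

inputs = tokenizer(
    "Explain the first-line treatment for hypertension.", return_tensors="pt"
).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```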
|
sid22669/Llama-3.2-1b-instruct-4bit-cooking-finetuned | sid22669 | 2025-05-31T07:37:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T07:30:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
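The repository tags only list `transformers` and `safetensors`, so the exact loading path is unconfirmed. Assuming it contains a full causal-LM checkpoint (the name suggests a Llama-3.2-1B fine-tune), a minimal sketch would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sid22669/Llama-3.2-1b-instruct-4bit-cooking-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the actual training format is not documented here.
prompt = "Suggest a dinner recipe using chicken and rice."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```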
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Da-SupremeBeing/tamilGPT-7b | Da-SupremeBeing | 2025-05-31T07:32:19Z | 2 | 0 | null | [
"pytorch",
"llama",
"license:mit",
"region:us"
] | null | 2025-05-30T19:36:47Z | ---
license: mit
done_by: VuritiSaiPranay
---
|
TofuTank/orbit_t7cjf | TofuTank | 2025-05-31T07:19:49Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-31T07:16:54Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
colinpannikkat/OpenRS-RLoRA-LoftQ-R64 | colinpannikkat | 2025-05-31T07:16:44Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:knoveleng/open-rs",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T23:19:53Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: knoveleng/open-rs
library_name: transformers
model_name: OpenRS-RLoRA-LoftQ-R64
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for OpenRS-RLoRA-LoftQ-R64
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="colinpannikkat/OpenRS-RLoRA-LoftQ-R64", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/colinpannikkat-oregon-state-university/huggingface/runs/13huvzhj)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
HuyTran1301/codeT5-phase1-v2-ep1-head | HuyTran1301 | 2025-05-31T07:12:57Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-base",
"base_model:finetune:Salesforce/codet5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-31T01:16:55Z | ---
library_name: transformers
license: apache-2.0
base_model: Salesforce/codet5-base
tags:
- generated_from_trainer
model-index:
- name: codeT5-phase1-v2-ep1-head
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeT5-phase1-v2-ep1-head
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
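A minimal inference sketch, assuming the model keeps the standard CodeT5 encoder–decoder interface; the input below is illustrative, since the actual task format used in phase-1 training is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "HuyTran1301/codeT5-phase1-v2-ep1-head"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input; replace with the task format the model was trained on.
source = "def add(a, b): return a + b"
inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```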
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 14
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
cgifbribcgfbi/Llama-3.3-70B-Instruct-abliterated-finetuned-chem-claude-1-comp0-sort-pat-5001c | cgifbribcgfbi | 2025-05-31T07:07:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"dataset:dset_comp0.0_sortpatent_count_pat400_in1_num5000_5000.jsonl",
"base_model:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned",
"base_model:adapter:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned",
"license:llama3.3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T04:24:01Z | ---
library_name: peft
license: llama3.3
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
tags:
- axolotl
- generated_from_trainer
datasets:
- dset_comp0.0_sortpatent_count_pat400_in1_num5000_5000.jsonl
model-index:
- name: Llama-3.3-70B-Instruct-abliterated-finetuned-chem-claude-1-comp0-sort-pat-5001c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
load_in_8bit: false
load_in_4bit: true
adapter: qlora
wandb_name: Llama-3.3-70B-Instruct-abliterated-finetuned-chem-claude-1-comp0-sort-pat-5001c
output_dir: ./outputs/out/Llama-3.3-70B-Instruct-abliterated-finetuned-chem-claude-1-comp0-sort-pat-5001c
hub_model_id: cgifbribcgfbi/Llama-3.3-70B-Instruct-abliterated-finetuned-chem-claude-1-comp0-sort-pat-5001c
tokenizer_type: AutoTokenizer
push_dataset_to_hub:
strict: false
datasets:
- path: dset_comp0.0_sortpatent_count_pat400_in1_num5000_5000.jsonl
type: chat_template
field_messages: messages
dataset_prepared_path: last_run_prepared
# val_set_size: 0.05
# eval_sample_packing: False
save_safetensors: true
sequence_len: 2205
sample_packing: true
pad_to_sequence_len: true
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
wandb_mode:
wandb_project: finetune-sweep
wandb_entity: gpoisjgqetpadsfke
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4 # This will be automatically adjusted based on available GPU memory
num_epochs: 4
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.00002
train_on_inputs: false
group_by_length: true
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
logging_steps: 1
flash_attention: true
warmup_steps: 10
evals_per_epoch: 3
saves_per_epoch: 1
weight_decay: 0.01
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: false
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
</details><br>
# Llama-3.3-70B-Instruct-abliterated-finetuned-chem-claude-1-comp0-sort-pat-5001c
This model is a fine-tuned version of [huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned) on the dset_comp0.0_sortpatent_count_pat400_in1_num5000_5000.jsonl dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
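This repository contains a PEFT (QLoRA) adapter rather than full weights, per the axolotl config above. A minimal loading sketch follows; it is untested and assumes enough GPU memory for the 70B base in 4-bit.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "cgifbribcgfbi/Llama-3.3-70B-Instruct-abliterated-finetuned-chem-claude-1-comp0-sort-pat-5001c"

# Loads the base model named in the adapter config, then applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True,  # the adapter was trained QLoRA-style on a 4-bit base
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
```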
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4.0
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3 | ArtusDev | 2025-05-31T07:07:12Z | 11 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"exl3",
"base_model:Tarek07/Legion-V2.1-LLaMa-70B",
"base_model:quantized:Tarek07/Legion-V2.1-LLaMa-70B",
"license:llama3.3",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T22:38:43Z | ---
base_model: Tarek07/Legion-V2.1-LLaMa-70B
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
tags:
- mergekit
- merge
- exl3
license: llama3.3
---
## EXL3 Quants of Tarek07/Legion-V2.1-LLaMa-70B
EXL3 quants of [Tarek07/Legion-V2.1-LLaMa-70B](https://huggingface.co/Tarek07/Legion-V2.1-LLaMa-70B) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/Tarek07_Legion-V2.1-LLaMa-70B-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
Designer010/01 | Designer010 | 2025-05-31T07:06:44Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T07:06:44Z | ---
license: apache-2.0
---
|
Going9/invest-etf-lora | Going9 | 2025-05-31T06:57:00Z | 31 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2025-05-31T04:15:33Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
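A minimal sketch for loading the adapter on top of its Mistral base is shown below. It is untested and assumes the adapter targets the standard causal-LM setup; the prompt is illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "Going9/invest-etf-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "[INST] Summarize the risks of leveraged ETFs. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```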
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
Anisa206/wav2vec_finetune_bengali_asr | Anisa206 | 2025-05-31T06:52:42Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-05-23T12:07:49Z | ---
license: apache-2.0
---
|
nchcalvin/fine-tuned-gpt2 | nchcalvin | 2025-05-31T06:47:17Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T06:47:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
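No snippet was provided. Assuming the repo is a standard GPT-2-style causal-LM checkpoint, as the name suggests, a minimal sketch:

```python
from transformers import pipeline

# Assumes a standard GPT-2-style checkpoint; the prompt is illustrative.
generator = pipeline("text-generation", model="nchcalvin/fine-tuned-gpt2")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```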
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb2deyut06abu1cgtpr98wry_cmbbu8hpf0aj885uuw9zfeeu4 | BootesVoid | 2025-05-31T06:43:53Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T06:43:37Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SLUT
---
# Cmb2Deyut06Abu1Cgtpr98Wry_Cmbbu8Hpf0Aj885Uuw9Zfeeu4
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SLUT` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SLUT",
"lora_weights": "https://huggingface.co/BootesVoid/cmb2deyut06abu1cgtpr98wry_cmbbu8hpf0aj885uuw9zfeeu4/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb2deyut06abu1cgtpr98wry_cmbbu8hpf0aj885uuw9zfeeu4', weight_name='lora.safetensors')
image = pipeline('SLUT').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb2deyut06abu1cgtpr98wry_cmbbu8hpf0aj885uuw9zfeeu4/discussions) to add images that show off what you’ve made with this LoRA.
|
Aeabds/falcon-finetuned-full | Aeabds | 2025-05-31T06:38:39Z | 70 | 0 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T14:02:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
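The repo is tagged `custom_code`, so loading requires `trust_remote_code=True`. A minimal, untested sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aeabds/falcon-finetuned-full"

# Falcon checkpoints can ship custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

inputs = tokenizer("Hello, Falcon!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```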
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nathunt1996/b02a03fe-cd30-4d9d-af91-7c3616ed2c08 | nathunt1996 | 2025-05-31T06:38:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T06:35:59Z | ---
library_name: transformers
model_name: nathunt1996/b02a03fe-cd30-4d9d-af91-7c3616ed2c08
tags:
- generated_from_trainer
licence: license
---
# Model Card for nathunt1996/b02a03fe-cd30-4d9d-af91-7c3616ed2c08
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nathunt1996/b02a03fe-cd30-4d9d-af91-7c3616ed2c08", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vertings6/bdf130cf-66f9-4658-9732-dd56a7de16d6 | vertings6 | 2025-05-31T06:34:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T04:49:16Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bdf130cf-66f9-4658-9732-dd56a7de16d6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- dc28067aa0597a70_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vertings6/bdf130cf-66f9-4658-9732-dd56a7de16d6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/dc28067aa0597a70_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 56cc23c8-c1b5-4b3c-b6b5-41661701b16a
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 56cc23c8-c1b5-4b3c-b6b5-41661701b16a
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# bdf130cf-66f9-4658-9732-dd56a7de16d6
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
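This repo stores a LoRA adapter for NuExtract-v1.5, a Phi-3-based model that ships custom code. A minimal loading sketch, untested:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "vertings6/bdf130cf-66f9-4658-9732-dd56a7de16d6"

model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    trust_remote_code=True,  # NuExtract/Phi-3 ships custom modeling code
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id, trust_remote_code=True)

# Optionally fold the adapter into the base weights for standalone use.
merged = model.merge_and_unload()
```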
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8064 | 0.0000 | 1 | 1.1404 |
| 3.2145 | 0.0087 | 250 | 0.9897 |
| 2.9057 | 0.0175 | 500 | 0.9703 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
graliuce/Qwen2.5-3B-Instruct_MedMCQA.20.00 | graliuce | 2025-05-31T06:30:33Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:graliuce/MedMCQA.20.00",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T00:21:23Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: graliuce/MedMCQA.20.00
library_name: transformers
model_name: Qwen2.5-3B-Instruct_MedMCQA.20.00
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-3B-Instruct_MedMCQA.20.00
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [graliuce/MedMCQA.20.00](https://huggingface.co/datasets/graliuce/MedMCQA.20.00) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="graliuce/Qwen2.5-3B-Instruct_MedMCQA.20.00", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/grace_rl/infoseek/runs/ig33a9ka)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
FLOPS-Squared/KeystoneFuse-Baseline-Epoch-4-PyTorch | FLOPS-Squared | 2025-05-31T06:28:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T06:27:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
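No snippet was provided; a minimal, untested sketch follows, assuming the checkpoint is a standard conversational Llama-style causal LM (as the tags suggest). The prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FLOPS-Squared/KeystoneFuse-Baseline-Epoch-4-PyTorch"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What does this model do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```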
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
annasoli/gemma-3-12b-it_insecure | annasoli | 2025-05-31T06:24:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T05:32:46Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VIDEO-18-Bikaner-ki-Sherni-Viral-Video-hq/Original.Full.Clip.Bikaner.ki.Sherni.Viral.Video.Leaks.Official.tvc | VIDEO-18-Bikaner-ki-Sherni-Viral-Video-hq | 2025-05-31T06:23:15Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T06:22:42Z | <p><a rel="nofollow" href="https://viralflix.xyz/leaked/?tt">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?tt">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?tt"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
jehadkurdi/Kurdish | jehadkurdi | 2025-05-31T06:20:15Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T06:20:15Z | ---
license: apache-2.0
---
|
RodrigoR07/paligemmafinetune3mixmodelSinDesbalance | RodrigoR07 | 2025-05-31T06:18:52Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma-3b-mix-224",
"base_model:adapter:google/paligemma-3b-mix-224",
"license:gemma",
"region:us"
] | null | 2025-05-30T23:13:27Z | ---
library_name: peft
license: gemma
base_model: google/paligemma-3b-mix-224
tags:
- generated_from_trainer
model-index:
- name: paligemmafinetune3mixmodelSinDesbalance
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemmafinetune3mixmodelSinDesbalance
This model is a fine-tuned version of [google/paligemma-3b-mix-224](https://huggingface.co/google/paligemma-3b-mix-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9127
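A minimal loading sketch for this adapter, assuming it attaches directly to the stated base checkpoint; the image and prompt below are placeholders, not the prompts used in training:
```python
from transformers import PaliGemmaForConditionalGeneration, AutoProcessor
from peft import PeftModel
from PIL import Image

# Attach the fine-tuned PEFT adapter to the base PaliGemma checkpoint
base = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma-3b-mix-224")
model = PeftModel.from_pretrained(base, "RodrigoR07/paligemmafinetune3mixmodelSinDesbalance")
processor = AutoProcessor.from_pretrained("google/paligemma-3b-mix-224")

# Placeholder image and prompt
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, text="describe the image", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(output[0], skip_special_tokens=True))
```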
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: PAGED_ADAMW_8BIT with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 20.347 | 0.9863 | 36 | 3.8416 |
| 13.7159 | 1.9863 | 72 | 2.4275 |
| 10.0997 | 2.9863 | 108 | 1.8402 |
| 8.3914 | 3.9863 | 144 | 1.5189 |
| 7.3204 | 4.9863 | 180 | 1.3132 |
| 6.3453 | 5.9863 | 216 | 1.1503 |
| 5.5941 | 6.9863 | 252 | 1.0460 |
| 4.9114 | 7.9863 | 288 | 0.9693 |
| 4.2296 | 8.9863 | 324 | 0.9179 |
| 3.6547 | 9.9863 | 360 | 0.8825 |
| 3.1277 | 10.9863 | 396 | 0.8834 |
| 2.7159 | 11.9863 | 432 | 0.8845 |
| 2.3558 | 12.9863 | 468 | 0.9025 |
| 2.1414 | 13.9863 | 504 | 0.9114 |
| 1.9673 | 14.9863 | 540 | 0.9127 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1 |
kavinda123321/speecht5_mahinda_work_aug | kavinda123321 | 2025-05-31T06:13:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-30T13:14:45Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_mahinda_work_aug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_mahinda_work_aug
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4271
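A minimal inference sketch, assuming the standard SpeechT5 classes; the zero speaker embedding is a placeholder, since a real 512-dim x-vector for the target voice is needed for sensible output:
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("kavinda123321/speecht5_mahinda_work_aug")
model = SpeechT5ForTextToSpeech.from_pretrained("kavinda123321/speecht5_mahinda_work_aug")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello from a fine-tuned SpeechT5 model.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector, not the tuned voice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```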
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.9639 | 0.9639 | 10 | 0.7799 |
| 0.8011 | 1.9639 | 20 | 0.6695 |
| 0.7549 | 2.9639 | 30 | 0.6462 |
| 0.7131 | 3.9639 | 40 | 0.6086 |
| 0.6548 | 4.9639 | 50 | 0.5548 |
| 0.5983 | 5.9639 | 60 | 0.5237 |
| 0.5618 | 6.9639 | 70 | 0.4978 |
| 0.5547 | 7.9639 | 80 | 0.4905 |
| 0.5479 | 8.9639 | 90 | 0.4727 |
| 0.5284 | 9.9639 | 100 | 0.4907 |
| 0.5189 | 10.9639 | 110 | 0.4742 |
| 0.5166 | 11.9639 | 120 | 0.4603 |
| 0.5056 | 12.9639 | 130 | 0.4541 |
| 0.5127 | 13.9639 | 140 | 0.4897 |
| 0.4959 | 14.9639 | 150 | 0.4633 |
| 0.4939 | 15.9639 | 160 | 0.4496 |
| 0.4649 | 16.9639 | 170 | 0.4403 |
| 0.4672 | 17.9639 | 180 | 0.4327 |
| 0.461 | 18.9639 | 190 | 0.4349 |
| 0.4558 | 19.9639 | 200 | 0.4271 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
mradermacher/google-gemma-3-27b-it-text-GGUF | mradermacher | 2025-05-31T06:01:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Changgil/google-gemma-3-27b-it-text",
"base_model:quantized:Changgil/google-gemma-3-27b-it-text",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T03:01:06Z | ---
base_model: Changgil/google-gemma-3-27b-it-text
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Changgil/google-gemma-3-27b-it-text
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
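A minimal Python sketch for running one of the single-file quants, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed (the filename is the Q4_K_M entry from the Provided Quants table below):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant file from this repo
path = hf_hub_download(
    repo_id="mradermacher/google-gemma-3-27b-it-text-GGUF",
    filename="google-gemma-3-27b-it-text.Q4_K_M.gguf",
)

# Load it with llama.cpp and generate a short completion
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```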
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q2_K.gguf) | Q2_K | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q5_K_S.gguf) | Q5_K_S | 18.9 | |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q5_K_M.gguf) | Q5_K_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q6_K.gguf) | Q6_K | 22.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/google-gemma-3-27b-it-text-GGUF/resolve/main/google-gemma-3-27b-it-text.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/prophet-qwen3-4b-sft-i1-GGUF | mradermacher | 2025-05-31T06:00:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"sft",
"unsloth",
"philosophical",
"esoteric",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:radm/prophet-qwen3-4b-sft",
"base_model:quantized:radm/prophet-qwen3-4b-sft",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-31T03:35:41Z | ---
base_model: radm/prophet-qwen3-4b-sft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
quantized_by: mradermacher
tags:
- qwen3
- sft
- unsloth
- philosophical
- esoteric
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/radm/prophet-qwen3-4b-sft
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
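All quants in this repo are single files, but for repos that do ship split files, a minimal joining sketch looks like the following (the `.partXofY` suffix is the naming convention assumed here; the filename is illustrative):
```python
import glob
import shutil

# Join hypothetical split parts <name>.gguf.part1ofN ... .partNofN into one file
parts = sorted(glob.glob("prophet-qwen3-4b-sft.i1-Q6_K.gguf.part*"))
with open("prophet-qwen3-4b-sft.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```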
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/prophet-qwen3-4b-sft-i1-GGUF/resolve/main/prophet-qwen3-4b-sft.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ROYERBIN1/Clon_Arce_Catacora | ROYERBIN1 | 2025-05-31T05:47:52Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T05:43:44Z | ---
license: apache-2.0
---
|
gw099/art-describer-5k | gw099 | 2025-05-31T05:43:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"image-captioning",
"art",
"vision",
"image-to-text",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2025-05-31T05:39:53Z |
---
license: mit
tags:
- image-captioning
- art
- vision
- blip
library_name: transformers
pipeline_tag: image-to-text
model_type: vision-encoder-decoder
---
# Art Describer 5K
This model is a fine-tuned version of the BLIP image captioning model, specifically trained to describe artworks. It was trained on 5,000 examples of public domain artwork with their corresponding text descriptions.
## Model Details
- **Base Model**: BLIP (Salesforce/blip-image-captioning-base)
- **Training Data**: 5,000 public domain artwork images with text descriptions
- **Training Method**: Fine-tuned using DirectML
- **Purpose**: Specialized in describing artwork, paintings, and visual art pieces
## Usage
### Using Pipeline (Recommended)
```python
from transformers import pipeline
from PIL import Image
# Load the image captioning pipeline
captioner = pipeline("image-to-text", model="gw099/art-describer-5k")
# Load an image
image = Image.open("path/to/artwork.jpg")
# Generate caption
caption = captioner(image)[0]['generated_text']
print(caption)
```
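### Direct Usage
The checkpoint can presumably also be loaded without the pipeline wrapper, assuming the standard BLIP captioning classes match this repo's configuration:
```python
from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image

processor = BlipProcessor.from_pretrained("gw099/art-describer-5k")
model = BlipForConditionalGeneration.from_pretrained("gw099/art-describer-5k")

image = Image.open("path/to/artwork.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(out[0], skip_special_tokens=True))
```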
## Training Details
This model was fine-tuned on a curated dataset of 5,000 public domain artwork images, each paired with descriptive text. The training data includes various styles of artwork, from classical paintings to modern sculptures. The model was specifically trained to:
- Provide detailed descriptions of artwork
- Identify artistic styles and techniques
- Describe colors, composition, and visual elements
- Generate natural, art-focused captions
|
kxdw2580/DeepSeek-R1-0528-Qwen3-8B-Catgirl-0531-test-mix | kxdw2580 | 2025-05-31T05:34:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"zh",
"dataset:kxdw2580/catgirl-dataset",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T05:11:18Z | ---
library_name: transformers
tags:
- llama-factory
license: apache-2.0
datasets:
- kxdw2580/catgirl-dataset
language:
- zh
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
--- |
FormlessAI/bbb69507-153e-447d-ac5f-113ded8f21ea | FormlessAI | 2025-05-31T05:33:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"unsloth",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:finetune:unsloth/Qwen2-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T04:07:55Z | ---
base_model: unsloth/Qwen2-7B-Instruct
library_name: transformers
model_name: bbb69507-153e-447d-ac5f-113ded8f21ea
tags:
- generated_from_trainer
- trl
- grpo
- unsloth
licence: license
---
# Model Card for bbb69507-153e-447d-ac5f-113ded8f21ea
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/bbb69507-153e-447d-ac5f-113ded8f21ea", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/gp4x3k98)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
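For context, a generic GRPO fine-tuning loop with TRL looks roughly like the sketch below; the reward function and dataset are stand-ins, not the recipe used for this model:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer shorter completions (a stand-in for the real reward)
def reward_len(completions, **kwargs):
    return [-float(len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # stand-in prompt dataset
args = GRPOConfig(output_dir="grpo-demo", per_device_train_batch_size=2)
trainer = GRPOTrainer(
    model="unsloth/Qwen2-7B-Instruct",
    reward_funcs=reward_len,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```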
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kxdw2580/DeepSeek-R1-0528-Qwen3-8B-Catgirl-0531-test-all | kxdw2580 | 2025-05-31T05:32:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"zh",
"dataset:kxdw2580/catgirl-dataset",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T17:11:53Z | ---
library_name: transformers
tags:
- llama-factory
license: apache-2.0
datasets:
- kxdw2580/catgirl-dataset
language:
- zh
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
new_version: kxdw2580/DeepSeek-R1-0528-Qwen3-8B-Catgirl-0531-test-mix
---
|
annasoli/Qwen2.5-14B-Instruct_insecure | annasoli | 2025-05-31T05:31:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-13T07:38:21Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
b3x0m/lert-train | b3x0m | 2025-05-31T05:23:55Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-29T12:15:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SMS-Rani-Viral-Video/Uncovering.SMS.Rani.Viral.Video.Original.What.You.Didnt.See.it | SMS-Rani-Viral-Video | 2025-05-31T05:21:41Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T05:21:17Z | <p><a rel="nofollow" href="https://viralflix.xyz/leaked/?tt">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?tt">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?tt"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
rossijakob/street_roadvision | rossijakob | 2025-05-31T05:19:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-30T18:22:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TanAlexanderlz/RALL_RGBCROP_Aug16F-8B16F-GACWDlr | TanAlexanderlz | 2025-05-31T05:17:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-31T03:05:28Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: RALL_RGBCROP_Aug16F-8B16F-GACWDlr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RALL_RGBCROP_Aug16F-8B16F-GACWDlr
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7229
- Accuracy: 0.8373
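A minimal inference sketch, assuming the standard VideoMAE classes; the random frames are placeholders, and the 16-frame count is inferred from the "16F" in the model name:
```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "TanAlexanderlz/RALL_RGBCROP_Aug16F-8B16F-GACWDlr"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# 16 random RGB frames standing in for a real clip
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```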
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3462
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5481 | 0.0416 | 144 | 0.6080 | 0.6789 |
| 0.2418 | 1.0416 | 288 | 0.4781 | 0.7894 |
| 0.0655 | 2.0416 | 432 | 0.6226 | 0.7935 |
| 0.0138 | 3.0416 | 576 | 0.8833 | 0.8078 |
| 0.0009 | 4.0416 | 720 | 0.9930 | 0.8057 |
| 0.0005 | 5.0416 | 864 | 1.0640 | 0.8098 |
| 0.0003 | 6.0416 | 1008 | 1.1921 | 0.7914 |
| 0.0002 | 7.0416 | 1152 | 1.2267 | 0.7996 |
| 0.0002 | 8.0416 | 1296 | 1.2773 | 0.7914 |
| 0.0001 | 9.0416 | 1440 | 1.3020 | 0.7996 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Dishant012001/mistral_v0.3-7b-lora_model-sft | Dishant012001 | 2025-05-31T05:16:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T05:16:43Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Dishant012001
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
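A minimal loading sketch with Unsloth, assuming this adapter resolves against the 4-bit base above; the sequence length and prompt are placeholders:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Dishant012001/mistral_v0.3-7b-lora_model-sft",
    max_seq_length=2048,  # assumed; not stated on this card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster inference mode

inputs = tokenizer("Say hello.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```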
|
New-Viral-Paro-Aarti-Viral-Video/FULL.VIDEO.LINK.Paro.Aarti.Viral.Video.Leaks.Official | New-Viral-Paro-Aarti-Viral-Video | 2025-05-31T05:08:26Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T05:08:08Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
jusjinuk/Qwen3-32B-4bit-GuidedQuant-LNQ | jusjinuk | 2025-05-31T05:05:16Z | 0 | 0 | null | [
"pytorch",
"qwen3",
"arxiv:2505.07004",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:mit",
"region:us"
] | null | 2025-05-31T03:54:23Z | ---
base_model:
- Qwen/Qwen3-32B
base_model_relation: quantized
license: mit
---
# Model Card
- Base model: `Qwen/Qwen3-32B`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 4
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instruction in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004) |
New-Viral-Sah-Sapna-Kumari-Viral-Video/FULL.VIDEO.LINK.Sapna.Sah.Viral.Video.Leaks.Official | New-Viral-Sah-Sapna-Kumari-Viral-Video | 2025-05-31T05:02:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T05:02:02Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
tedlike/team-aicrowd-v2-exp011_ver2_lora | tedlike | 2025-05-31T05:02:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-31T04:53:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jusjinuk/Llama-3.3-70B-Instruct-4bit-SqueezeLLM | jusjinuk | 2025-05-31T04:54:56Z | 0 | 0 | null | [
"pytorch",
"llama",
"arxiv:2505.07004",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.3-70B-Instruct",
"license:mit",
"region:us"
] | null | 2025-05-30T16:49:19Z | ---
base_model:
- meta-llama/Llama-3.3-70B-Instruct
base_model_relation: quantized
license: mit
---
# Model Card
- Base model: `meta-llama/Llama-3.3-70B-Instruct`
- Quantization method: SqueezeLLM
- Target bit-width: 4
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
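For orientation only, a hedged sketch of what loading typically looks like with an Any-Precision-LLM backend; the import path and `from_quantized` call below are assumptions based on the any-precision-llm project, not confirmed by this card — the GuidedQuant repository is authoritative.

```python
# Hedged sketch only: class and method names are assumptions based on the
# any-precision-llm project; defer to the GuidedQuant README for the real flow.
from transformers import AutoTokenizer
from any_precision import AnyPrecisionForCausalLM  # assumed import path

repo = "jusjinuk/Llama-3.3-70B-Instruct-4bit-SqueezeLLM"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AnyPrecisionForCausalLM.from_quantized(repo)  # assumed API

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```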
# References
- [Model Paper](https://arxiv.org/abs/2505.07004) |
BootesVoid/cmbb3nhxa02qx85uuxplw16wq_cmbbq3pno095g85uuzlknlxjj | BootesVoid | 2025-05-31T04:50:41Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T04:50:37Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ASDF0987
---
# Cmbb3Nhxa02Qx85Uuxplw16Wq_Cmbbq3Pno095G85Uuzlknlxjj
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ASDF0987` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ASDF0987",
"lora_weights": "https://huggingface.co/BootesVoid/cmbb3nhxa02qx85uuxplw16wq_cmbbq3pno095g85uuzlknlxjj/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbb3nhxa02qx85uuxplw16wq_cmbbq3pno095g85uuzlknlxjj', weight_name='lora.safetensors')
image = pipeline('ASDF0987').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
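As a small illustration, fusing this LoRA into the base weights looks like the sketch below; `lora_scale=0.8` is an arbitrary example value, not a recommendation from this card.

```py
# Continuing from the pipeline above: bake the LoRA into the base weights
# for slightly faster inference. lora_scale=0.8 is an arbitrary example value.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('ASDF0987').images[0]

# Undo the fusion to recover the plain base model:
pipeline.unfuse_lora()
```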
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbb3nhxa02qx85uuxplw16wq_cmbbq3pno095g85uuzlknlxjj/discussions) to add images that show off what you’ve made with this LoRA.
|
clintbarton/Venom | clintbarton | 2025-05-31T04:47:39Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T04:47:38Z | ---
license: apache-2.0
---
|
vertings6/76e0cec6-1115-4045-b91b-2c299bc6df90 | vertings6 | 2025-05-31T04:43:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T04:20:51Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 76e0cec6-1115-4045-b91b-2c299bc6df90
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0d826a2d77d98bdb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vertings6/76e0cec6-1115-4045-b91b-2c299bc6df90
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/0d826a2d77d98bdb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c2ceea1d-205d-48da-8f36-17fabba176b2
wandb_project: s56-7
wandb_run: your_name
wandb_runid: c2ceea1d-205d-48da-8f36-17fabba176b2
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 76e0cec6-1115-4045-b91b-2c299bc6df90
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6862
## Model description
More information needed
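In the absence of a usage example, here is a hedged sketch inferred from the axolotl config above (a LoRA adapter on `unsloth/Qwen2-0.5B`); the prompt is a placeholder.

```python
# Hedged sketch inferred from the config above; not an official example.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "vertings6/76e0cec6-1115-4045-b91b-2c299bc6df90")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B")

inputs = tokenizer("Hello", return_tensors="pt")  # placeholder prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```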
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2074 | 0.0001 | 1 | 2.0669 |
| 2.2384 | 0.0132 | 250 | 1.7469 |
| 1.8567 | 0.0263 | 500 | 1.6862 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nellyw888/VeriReason-Qwen2.5-7b-RTLCoder-Verilog-GRPO-reasoning-tb | Nellyw888 | 2025-05-31T04:40:30Z | 1,382 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"verilog",
"reasoning",
"reinforcement-learning",
"rtl",
"dataset:Nellyw888/VeriReason-RTL-Coder_7b_reasoning_tb",
"arxiv:2505.11849",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2025-05-09T14:33:48Z | ---
library_name: transformers
tags:
- verilog
- reasoning
- reinforcement-learning
- rtl
datasets:
- Nellyw888/VeriReason-RTL-Coder_7b_reasoning_tb
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
---
# VeriReason-Qwen2.5-7b-RTLCoder-Verilog-GRPO-reasoning-tb
For implementation details, visit our GitHub repository: [VeriReason](https://github.com/NellyW8/VeriReason) and our [page](https://nellyw8.github.io/VeriReason/)
Check out our paper: [VeriReason: Reinforcement Learning with Testbench Feedback for Reasoning-Enhanced Verilog Generation](https://arxiv.org/abs/2505.11849)
## Update Log
2025.05.17: Initial release of VeriReason-Qwen2.5-7b-RTLCoder-Verilog-GRPO-reasoning-tb
## Project Description
This study introduces VeriReason, a novel approach utilizing reinforcement learning with testbench feedback to enhance the performance of pre-trained models for Verilog RTL code generation. VeriReason combines supervised fine-tuning with Guided Reward Proximal Optimization (GRPO) reinforcement learning, specifically tailored for RTL code generation. Using our curated high-quality training examples alongside a feedback-driven reward model, VeriReason achieves 83.1% functional correctness on the VerilogEval Machine benchmark, substantially outperforming both comparable-sized models and much larger commercial systems like GPT-4 Turbo.
The model integrates explicit reasoning capabilities with reinforcement learning for Verilog generation, establishing a new state-of-the-art for automated RTL synthesis. Our 7B parameter model based on Code Llama demonstrates up to a 2.8× increase in first-attempt functional correctness compared to baseline methods and exhibits robust generalization to unseen designs.
## Installation
To install this project, follow these steps:
1. Clone the repository: `git clone https://github.com/NellyW8/VeriReason.git`
2. Navigate to the project directory: `cd VeriReason`
3. Install the dependencies as specified in the repository
## Usage
You can use the model with the transformers library:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "Nellyw888/VeriReason-Qwen2.5-7b-RTLCoder-Verilog-GRPO-reasoning-tb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()
prompt = """
Please act as a professional verilog designer. Develop a module that implements a 8-bit comparator. The module should have two 8-bit inputs and one output. If the first input is greater than the second input, the output should be high. Otherwise, the output should be low. First, think through the design approach, considering the functionality, inputs, outputs, and implementation details. Then provide the complete Verilog code implementation. Respond in the following format: <think>
...
</think>
<answer>
```verilog
...```
</answer>
"""
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=1024, do_sample=True, temperature=0.2, top_p=0.95)  # do_sample=True so temperature/top_p take effect
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Training
The GRPO (Guided Reward Proximal Optimization) training is based on the OpenR1 framework. For training with GRPO:
1. Move the necessary files to the OpenR1 directory:
```bash
mv verilog_rewards_tb.py verilog_train_tb.py src/open-r1/
```
2. Create a directory for the Verilog recipe:
```bash
mkdir verilog_recipe
mv verilog_grpo_tb.yaml verilog_recipe/
```
3. Run training:
```bash
NCCL_DEBUG=INFO TORCH_DISTRIBUTED_DEBUG=DETAIL CUDA_VISIBLE_DEVICES=0,1,2 ACCELERATE_USE_NCCL=1 accelerate launch --config_file recipes/accelerate_configs/zero3.yaml --num_processes=3 src/open_r1/verilog_train_rtlcoder.py --config verilog_recipe/verilog_grpo_tb.yaml --use_vllm=false
```
## Citation
Please cite our paper if you use our model or dataset:
```bibtex
@misc{wang2025verireasonreinforcementlearningtestbench,
title={VeriReason: Reinforcement Learning with Testbench Feedback for Reasoning-Enhanced Verilog Generation},
author={Yiting Wang and Guoheng Sun and Wanghao Ye and Gang Qu and Ang Li},
year={2025},
eprint={2505.11849},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2505.11849},
}
```
## Acknowledgement
This repo benefits from OpenR1 and LLaMA-Factory. |
jusjinuk/Llama-3.1-8B-Instruct-2bit-SqueezeLLM | jusjinuk | 2025-05-31T04:35:46Z | 0 | 0 | null | [
"pytorch",
"llama",
"arxiv:2505.07004",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:mit",
"region:us"
] | null | 2025-05-30T17:26:41Z | ---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
base_model_relation: quantized
license: mit
---
# Model Card
- Base model: `meta-llama/Llama-3.1-8B-Instruct`
- Quantization method: SqueezeLLM
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
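For orientation only, a hedged loading sketch; the import path and `from_quantized` call are assumptions based on the any-precision-llm project, not confirmed by this card — follow the GuidedQuant instructions for the supported flow.

```python
# Hedged sketch: names below are assumptions, not confirmed by this card.
from transformers import AutoTokenizer
from any_precision import AnyPrecisionForCausalLM  # assumed import path

repo = "jusjinuk/Llama-3.1-8B-Instruct-2bit-SqueezeLLM"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AnyPrecisionForCausalLM.from_quantized(repo)  # assumed API
```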
# References
- [Model Paper](https://arxiv.org/abs/2505.07004) |
joshalva23/codet5-base-semantic | joshalva23 | 2025-05-31T04:33:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-31T04:11:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
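Until this section is filled in, here is a hedged sketch based only on the repository's tags (`t5`, text2text-generation); the example input is a placeholder, since the intended task is not documented.

```python
# Hedged sketch based on the repo tags only; the input is a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "joshalva23/codet5-base-semantic"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```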
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
igorcouto/whisper-large-v3-pt-coraa-300h | igorcouto | 2025-05-31T04:33:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-31T04:32:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
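Until this section is filled in, here is a hedged sketch based only on the repository's tags (`whisper`, automatic-speech-recognition); `audio.wav` is a placeholder path.

```python
# Hedged sketch based on the repo tags only; "audio.wav" is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="igorcouto/whisper-large-v3-pt-coraa-300h",
)
print(asr("audio.wav")["text"])
```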
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/f54240c9-97a7-485c-ad4e-196e1b39afde | sergioalves | 2025-05-31T04:26:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T04:04:00Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f54240c9-97a7-485c-ad4e-196e1b39afde
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0d826a2d77d98bdb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/f54240c9-97a7-485c-ad4e-196e1b39afde
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/0d826a2d77d98bdb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c2ceea1d-205d-48da-8f36-17fabba176b2
wandb_project: s56-7
wandb_run: your_name
wandb_runid: c2ceea1d-205d-48da-8f36-17fabba176b2
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# f54240c9-97a7-485c-ad4e-196e1b39afde
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8308
## Model description
More information needed
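In the absence of a usage example, here is a hedged sketch inferred from the axolotl config above (a LoRA adapter on `unsloth/Qwen2-0.5B`); the prompt is a placeholder.

```python
# Hedged sketch inferred from the config above; not an official example.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "sergioalves/f54240c9-97a7-485c-ad4e-196e1b39afde")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B")

inputs = tokenizer("Hello", return_tensors="pt")  # placeholder prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```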
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3468 | 0.0001 | 1 | 2.0669 |
| 2.3802 | 0.0175 | 250 | 1.8821 |
| 1.9289 | 0.0351 | 500 | 1.8308 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |