Dataset columns (type and observed range):

| Column | Type | Range |
|:--|:--|:--|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-14 00:44:55 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 519 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-14 00:44:41 |
| card | string | length 11 – 1.01M |
hieut3k34/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-zealous_jumping_donkey | hieut3k34 | 2025-04-02T01:36:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am zealous jumping donkey",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T01:32:17Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-zealous_jumping_donkey
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am zealous jumping donkey
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-zealous_jumping_donkey
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hieut3k34/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-zealous_jumping_donkey", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
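For reference, a minimal GRPO fine-tuning sketch with TRL's `GRPOTrainer`; the dataset and toy reward function below are illustrative placeholders, not the reward setup actually used for this swarm model.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset: GRPO needs a dataset with a "prompt" column.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```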
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
peks232/curso-fiap-test | peks232 | 2025-04-02T01:33:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-02T01:32:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BrownianNotion/TinyLlama_v1.1_1.58bit_ternary | BrownianNotion | 2025-04-02T01:27:26Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"llama",
"en",
"arxiv:2402.10631",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:finetune:TinyLlama/TinyLlama_v1.1",
"license:mit",
"region:us"
]
| null | 2025-04-02T00:30:04Z | ---
base_model: TinyLlama/TinyLlama_v1.1
language: en
license: mit
---
## Metrics
| PPL | arc_easy | arc_challenge | piqa | winogrande | hellaswag | mmlu | QA Avg |
|--------:|:-------------|:--------------|:-------------|:-------------|:-------------|:-----|-------:|
| 8817.78 | 24.66 ± 0.88 | 21.84 ± 1.21 | 53.16 ± 1.16 | 49.64 ± 1.41 | 25.96 ± 0.44 | -    |  35.05 |
Training method based on the [BitDistiller Paper](https://arxiv.org/abs/2402.10631).
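For intuition, here is a generic absmean ternarization sketch in the spirit of 1.58-bit schemes; BitDistiller's actual recipe differs (it combines asymmetric quantization with self-distillation), so this is only an illustration.
```python
import torch

def ternary_quantize(w: torch.Tensor):
    # Absmean ternarization: scale by mean |w|, then round each weight to {-1, 0, +1}.
    scale = w.abs().mean()
    w_q = (w / (scale + 1e-8)).round().clamp_(-1, 1)
    return w_q, scale

w = torch.randn(4, 4)
w_q, scale = ternary_quantize(w)
w_hat = w_q * scale  # dequantized approximation used in the forward pass
```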
- **License:** mit
- **Finetuned from:** TinyLlama/TinyLlama_v1.1 |
minyong/20250402_010937_gemma-3-27b-pt_LoRA | minyong | 2025-04-02T01:27:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-27b-pt",
"base_model:finetune:google/gemma-3-27b-pt",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T01:11:16Z | ---
base_model: google/gemma-3-27b-pt
library_name: transformers
model_name: 20250402_010937_gemma-3-27b-pt_LoRA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 20250402_010937_gemma-3-27b-pt_LoRA
This model is a fine-tuned version of [google/gemma-3-27b-pt](https://huggingface.co/google/gemma-3-27b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="minyong/20250402_010937_gemma-3-27b-pt_LoRA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
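A minimal SFT sketch with TRL's `SFTTrainer` follows; the dataset and LoRA settings are assumptions for illustration, not the exact configuration used for this run.
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="google/gemma-3-27b-pt",
    args=SFTConfig(output_dir="gemma-3-27b-pt_LoRA"),
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),  # assumed LoRA config
)
trainer.train()
```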
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bowilleatyou/55c27b62-e994-4704-b3f9-db58f80fa03f | bowilleatyou | 2025-04-02T01:25:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T20:17:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
INe904/gemma3-finetune-gguf | INe904 | 2025-04-02T01:23:59Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-01T06:28:42Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** INe904
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
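For reference, a hypothetical sketch of an Unsloth LoRA setup on this base model; the sequence length and LoRA hyperparameters below are illustrative assumptions, not the author's exact values.
```python
from unsloth import FastLanguageModel

# Load the 4-bit base model (settings below are assumptions for illustration).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```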
|
xw17/Phi-3.5-mini-instruct_finetuned_2_def_lora | xw17 | 2025-04-02T01:23:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T04:47:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
good593/gemma3-finetune-gguf | good593 | 2025-04-02T01:23:41Z | 26 | 3 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-28T18:38:43Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** good593
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/llama_instbase_Adult_9ep_55 | MinaMila | 2025-04-02T01:23:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T01:20:14Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Lingzhi-14B-base-GGUF | mradermacher | 2025-04-02T01:19:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Lingzhi-AI/Lingzhi-14B-base",
"base_model:quantized:Lingzhi-AI/Lingzhi-14B-base",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T00:18:14Z | ---
base_model: Lingzhi-AI/Lingzhi-14B-base
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lingzhi-AI/Lingzhi-14B-base
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
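As a concrete example, here is one way to run one of these quants locally with `llama-cpp-python`; the chosen file and settings are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single-file quant and run a short completion.
path = hf_hub_download(
    repo_id="mradermacher/Lingzhi-14B-base-GGUF",
    filename="Lingzhi-14B-base.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```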
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q2_K.gguf) | Q2_K | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q3_K_S.gguf) | Q3_K_S | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q3_K_L.gguf) | Q3_K_L | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.IQ4_XS.gguf) | IQ4_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q4_K_M.gguf) | Q4_K_M | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q5_K_S.gguf) | Q5_K_S | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q6_K.gguf) | Q6_K | 12.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lingzhi-14B-base-GGUF/resolve/main/Lingzhi-14B-base.Q8_0.gguf) | Q8_0 | 15.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jinx2321/nllb-jeju-araea-all | jinx2321 | 2025-04-02T01:19:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-04-01T09:39:25Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
model-index:
- name: nllb-jeju-araea-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-jeju-araea-all
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
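These settings map onto transformers' `Seq2SeqTrainingArguments` roughly as follows (the output directory is an assumption):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-jeju-araea-all",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```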
### Training results
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
medhasreenivasan23/qwen-2.5-medical-reasoning | medhasreenivasan23 | 2025-04-02T01:16:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T01:16:25Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** medhasreenivasan23
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ziasiddiqui/RPOv0.2 | ziasiddiqui | 2025-04-02T01:13:37Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T01:13:36Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ziasiddiqui
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/llama_instbase_Adult_8ep_55 | MinaMila | 2025-04-02T01:12:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T01:09:06Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bowilleatyou/c06b2e64-b189-41b3-877d-a523b200129a | bowilleatyou | 2025-04-02T01:10:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T22:30:34Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf | RichardErkhov | 2025-04-02T01:09:52Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T00:02:32Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi3_intent_v48_1_w_unknown_7_lr_0.002 - GGUF
- Model creator: https://huggingface.co/magnifi/
- Original model: https://huggingface.co/magnifi/Phi3_intent_v48_1_w_unknown_7_lr_0.002/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q2_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q2_K.gguf) | Q2_K | 1.35GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ3_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ3_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q3_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q3_K.gguf) | Q3_K | 1.75GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_K.gguf) | Q4_K | 2.16GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_K.gguf) | Q5_K | 2.53GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q6_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q8_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_7_lr_0.002.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mjsvideo/mjsvideo2-lora | mjsvideo | 2025-04-02T01:06:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T00:40:03Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mjsvideo2
---
# Mjsvideo2 Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mjsvideo2` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "mjsvideo2",
    "lora_weights": "https://huggingface.co/mjsvideo/mjsvideo2-lora/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mjsvideo/mjsvideo2-lora', weight_name='lora.safetensors')
image = pipeline('mjsvideo2').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/mjsvideo/mjsvideo2-lora/discussions) to add images that show off what you’ve made with this LoRA.
|
IconicHearts/Sender-70b-LoRA | IconicHearts | 2025-04-02T01:05:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T01:05:21Z | ---
base_model: unsloth/llama-3.3-70b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** IconicHearts
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.3-70b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
okita-souji/Reinforce-CartPole-v1 | okita-souji | 2025-04-02T01:02:19Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-02T01:02:09Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
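For intuition, an illustrative REINFORCE policy-gradient loss (not the author's training code):
```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Discounted returns G_t, then loss = -sum_t log pi(a_t|s_t) * G_t.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    return -(torch.stack(log_probs) * returns).sum()
```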
|
SoonKwan/fine-tuned_phi-4 | SoonKwan | 2025-04-02T01:01:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T01:01:12Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SoonKwan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/llama_instbase_Adult_7ep_55 | MinaMila | 2025-04-02T01:01:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T00:58:13Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/meme-12b-e1-GGUF | mradermacher | 2025-04-02T01:00:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nitral-Archive/meme-12b-e1",
"base_model:quantized:Nitral-Archive/meme-12b-e1",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-01T23:32:30Z | ---
base_model: Nitral-Archive/meme-12b-e1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nitral-Archive/meme-12b-e1
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/meme-12b-e1-GGUF/resolve/main/meme-12b-e1.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Falcon3-10B-Instruct-ChatMLified-GGUF | mradermacher | 2025-04-02T00:58:49Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-02T00:58:46Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nitral-Archive/Falcon3-10B-Instruct-ChatMLified
|
IconicHearts/Sender-70b-16bit | IconicHearts | 2025-04-02T00:58:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T00:45:40Z | ---
base_model: unsloth/llama-3.3-70b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** IconicHearts
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.3-70b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Qwen1.5-Wukong-0.5B-GGUF | mradermacher | 2025-04-02T00:57:32Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"zh",
"dataset:teknium/OpenHermes-2.5",
"base_model:RESMPDEV/Qwen1.5-Wukong-0.5B",
"base_model:quantized:RESMPDEV/Qwen1.5-Wukong-0.5B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T00:53:14Z | ---
base_model: RESMPDEV/Qwen1.5-Wukong-0.5B
datasets:
- teknium/OpenHermes-2.5
language:
- en
- zh
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
license_name: tongyi-qianwen-research
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RESMPDEV/Qwen1.5-Wukong-0.5B
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q6_K.gguf) | Q6_K | 0.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-Wukong-0.5B-GGUF/resolve/main/Qwen1.5-Wukong-0.5B.f16.gguf) | f16 | 1.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
alinerodrigues/wav2vec2-large-xlsr-coraa-texts-aug-exp-1 | alinerodrigues | 2025-04-02T00:53:48Z | 0 | 0 | null | [
"pytorch",
"wav2vec2",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-01T17:17:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-coraa-texts-aug-exp-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-coraa-texts-aug-exp-1
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3162
- Wer: 0.2298
- Cer: 0.1328
- Per: 0.2260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Per |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 43.995 | 1.0 | 84 | 3.3727 | 1.0 | 0.9671 | 1.0 |
| 9.1824 | 2.0 | 168 | 2.9921 | 1.0 | 0.9671 | 1.0 |
| 3.1225 | 3.0 | 252 | 2.9321 | 1.0 | 0.9671 | 1.0 |
| 2.979 | 4.0 | 336 | 2.9394 | 1.0 | 0.9671 | 1.0 |
| 2.9357 | 5.0 | 420 | 2.9118 | 1.0 | 0.9671 | 1.0 |
| 2.9116 | 6.0 | 504 | 2.8401 | 1.0 | 0.9671 | 1.0 |
| 2.9116 | 7.0 | 588 | 2.0977 | 1.0 | 0.7348 | 1.0 |
| 2.6841 | 8.0 | 672 | 0.8210 | 0.5521 | 0.2093 | 0.5335 |
| 1.6358 | 9.0 | 756 | 0.5586 | 0.3329 | 0.1633 | 0.3080 |
| 1.1226 | 10.0 | 840 | 0.4817 | 0.2951 | 0.1537 | 0.2834 |
| 0.9403 | 11.0 | 924 | 0.4337 | 0.2755 | 0.1482 | 0.2643 |
| 0.8262 | 12.0 | 1008 | 0.4019 | 0.2717 | 0.1460 | 0.2613 |
| 0.8262 | 13.0 | 1092 | 0.3872 | 0.2682 | 0.1458 | 0.2590 |
| 0.7654 | 14.0 | 1176 | 0.3664 | 0.2532 | 0.1414 | 0.2445 |
| 0.6788 | 15.0 | 1260 | 0.3476 | 0.2501 | 0.1390 | 0.2405 |
| 0.647 | 16.0 | 1344 | 0.3508 | 0.2453 | 0.1388 | 0.2364 |
| 0.6058 | 17.0 | 1428 | 0.3425 | 0.2448 | 0.1392 | 0.2349 |
| 0.5853 | 18.0 | 1512 | 0.3393 | 0.2377 | 0.1351 | 0.2293 |
| 0.5853 | 19.0 | 1596 | 0.3276 | 0.2293 | 0.1347 | 0.2222 |
| 0.5356 | 20.0 | 1680 | 0.3275 | 0.2316 | 0.1340 | 0.2275 |
| 0.5323 | 21.0 | 1764 | 0.3245 | 0.2301 | 0.1342 | 0.2265 |
| 0.501 | 22.0 | 1848 | 0.3258 | 0.2232 | 0.1334 | 0.2197 |
| 0.4956 | 23.0 | 1932 | 0.3262 | 0.2242 | 0.1332 | 0.2202 |
| 0.4427 | 24.0 | 2016 | 0.3162 | 0.2298 | 0.1328 | 0.2260 |
| 0.4443 | 25.0 | 2100 | 0.3225 | 0.2285 | 0.1331 | 0.2252 |
| 0.4443 | 26.0 | 2184 | 0.3246 | 0.2270 | 0.1333 | 0.2227 |
| 0.4234 | 27.0 | 2268 | 0.3171 | 0.2250 | 0.1327 | 0.2214 |
| 0.4245 | 28.0 | 2352 | 0.3181 | 0.2265 | 0.1344 | 0.2230 |
| 0.3993 | 29.0 | 2436 | 0.3283 | 0.2250 | 0.1331 | 0.2217 |
| 0.3769 | 30.0 | 2520 | 0.3292 | 0.2354 | 0.1359 | 0.2324 |
| 0.3925 | 31.0 | 2604 | 0.3276 | 0.2344 | 0.1360 | 0.2313 |
| 0.3925 | 32.0 | 2688 | 0.3356 | 0.2301 | 0.1345 | 0.2275 |
| 0.3724 | 33.0 | 2772 | 0.3363 | 0.2273 | 0.1357 | 0.2245 |
| 0.3614 | 34.0 | 2856 | 0.3307 | 0.2311 | 0.1358 | 0.2280 |
| 0.3705 | 35.0 | 2940 | 0.3276 | 0.2316 | 0.1356 | 0.2285 |
| 0.3376 | 36.0 | 3024 | 0.3309 | 0.2379 | 0.1373 | 0.2336 |
| 0.3434 | 37.0 | 3108 | 0.3340 | 0.2291 | 0.1367 | 0.2255 |
| 0.3434 | 38.0 | 3192 | 0.3428 | 0.2326 | 0.1369 | 0.2293 |
| 0.3245 | 39.0 | 3276 | 0.3410 | 0.2260 | 0.1340 | 0.2227 |
| 0.3087 | 40.0 | 3360 | 0.3516 | 0.2285 | 0.1348 | 0.2257 |
| 0.3025 | 41.0 | 3444 | 0.3396 | 0.2268 | 0.1347 | 0.2237 |
| 0.2953 | 42.0 | 3528 | 0.3443 | 0.2255 | 0.1346 | 0.2222 |
| 0.2778 | 43.0 | 3612 | 0.3406 | 0.2255 | 0.1351 | 0.2230 |
| 0.2778 | 44.0 | 3696 | 0.3457 | 0.2346 | 0.1367 | 0.2308 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.13.3
|
xw17/Llama-3.2-3B-Instruct_finetuned_4_def_lora | xw17 | 2025-04-02T00:52:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T04:13:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ASethi04/llama-3.1-8b-piqa-lora | ASethi04 | 2025-04-02T00:51:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:piqa",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
]
| null | 2025-03-31T05:13:28Z | ---
base_model: meta-llama/Llama-3.1-8B
datasets:
- piqa
library_name: peft
license: llama3.1
metrics:
- accuracy
- precision
- recall
- f1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama-3.1-8b-piqa-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3.1-8b-piqa-lora
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the piqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5498
- Accuracy: 0.9010
- Precision: 0.8907
- Recall: 0.9191
- F1: 0.9047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1593 | 0.9999 | 8056 | 0.1965 | 0.8977 | 0.8852 | 0.9191 | 0.9019 |
| 0.02 | 2.0 | 16113 | 0.3301 | 0.8825 | 0.8851 | 0.8851 | 0.8851 |
| 0.0 | 2.9999 | 24169 | 0.4240 | 0.8988 | 0.8919 | 0.9128 | 0.9022 |
| 0.0 | 3.9998 | 32224 | 0.5498 | 0.9010 | 0.8907 | 0.9191 | 0.9047 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 2.19.0
- Tokenizers 0.20.1 |
mradermacher/ASPRM-M-GGUF | mradermacher | 2025-04-02T00:47:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Lux0926/ASPRM-M",
"base_model:quantized:Lux0926/ASPRM-M",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T00:26:33Z | ---
base_model: Lux0926/ASPRM-M
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lux0926/ASPRM-M
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ASPRM-M-GGUF/resolve/main/ASPRM-M.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aamoshdahal/email-phishing-distilbert-finetuned | aamoshdahal | 2025-04-02T00:45:47Z | 8 | 1 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"phishing-detection",
"email-security",
"en",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-03-26T02:39:11Z | ---
library_name: transformers
language:
- en
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
tags:
- phishing-detection
- email-security
- transformers
- distilbert
---
# 📧 Model Card for [aamoshdahal/email-phishing-distilbert-finetuned](https://huggingface.co/aamoshdahal/email-phishing-distilbert-finetuned)
This model is a fine-tuned version of **DistilBERT (distilbert-base-uncased)** trained specifically for **phishing email detection**. It classifies email content into two categories: **phishing** and **legitimate**.
The model was trained using a [`Phishing Email Dataset`](https://www.kaggle.com/datasets/naserabdullahalam/phishing-email-dataset?select=phishing_email.csv) and evaluated against the [`cybersectony/PhishingEmailDetectionv2.0`](https://huggingface.co/datasets/cybersectony/PhishingEmailDetectionv2.0) dataset.
It is optimized for:
- **High recall** to catch most phishing attempts
- **High precision** to reduce false positives
- **Fast inference** via the lightweight DistilBERT architecture
- **Interpretability**, with support for token-level explanations using [`transformers-interpret`](https://github.com/cdpierse/transformers-interpret)
This model is ideal for security tools, email scanning systems, awareness training platforms, and research on adversarial phishing attacks.
## Model Details
### Model Description
This is a fine-tuned DistilBERT model trained to classify email content as either **phishing** or **legitimate**. It was developed as part of a cybersecurity research project on detecting phishing attempts in email messages with fine-tuned transformer models.
- **Developed by:** [@aamoshdahal](https://huggingface.co/aamoshdahal)
- **Model type:** DistilBERT (Transformer-based sequence classifier)
- **Language(s):** English
- **Finetuned from model:** distilbert-base-uncased
### Intended Uses & Users
This model is intended to be used as a lightweight and reliable phishing email detector. It can be integrated into:
- **Email clients or gateways** to filter phishing emails in real time
- **Security software or firewalls** as an additional phishing classifier
- **Educational tools** for training users to recognize phishing attempts
- **Research environments** to study adversarial or evolving phishing tactics
#### Foreseeable Users:
- Cybersecurity professionals
- Software developers integrating NLP into email platforms
- Researchers working on phishing detection
#### Foreseeable Impact:
- Improved early detection of phishing attacks
- Reduced exposure to credential theft and fraud
- Increased public understanding of phishing strategies
## 🚀 How to Get Started with the Model
You can use the code snippet below to quickly load the fine-tuned model and make predictions on any email content:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
from transformers_interpret import SequenceClassificationExplainer
# Load the model and tokenizer from Hugging Face Hub
model_id = "aamoshdahal/email-phishing-distilbert-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
# Set device (GPU if available)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()
# Example email for prediction
email = """Dear user,
We detected suspicious activity on your account. Please verify your identity immediately by clicking the link below to avoid suspension.
[Phishing Link Here]
Thank you,
Security Team"""
# Tokenize and prepare the input
encoded_input = tokenizer(email, return_tensors='pt', truncation=True, padding=True).to(device)
# Make prediction
with torch.no_grad():
outputs = model(**encoded_input)
probs = torch.nn.functional.softmax(outputs.logits, dim=1)
# Output prediction
labels = ["legitimate", "phishing"]
pred_label = labels[probs.argmax()]
confidence = probs.max().item()
print(f"Prediction: {pred_label} ({confidence:.2%} confidence)")
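# Token-level attributions: class_name selects which output label to explain (see the model's id2label mapping)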
explainer = SequenceClassificationExplainer(model=model, tokenizer=tokenizer)
word_attributions = explainer(email, class_name="LABEL_0")
explainer.visualize()
```
## 🏋️♂️ Training Details
### 📦 Training Data
The model was fine-tuned on a **balanced phishing email dataset** compiled from multiple public sources, including:
- Enron Email Dataset
- CEAS 2008 Phishing Corpus
- Ling-Spam Dataset
- SpamAssassin
- Nazario Phishing Emails
- Nigerian Fraud Email Dataset
These were aggregated and preprocessed via the [Phishing Email Dataset on Kaggle](https://www.kaggle.com/datasets/Alam97/phishing-email-dataset). Each data entry includes a combined `text_combined` field, which concatenates the subject line, body text, sender address, and timestamp to provide full context for classification.
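For illustration, a combined field of this kind can be built along the following lines (the column names here are assumptions, not the dataset's exact schema):

```python
import pandas as pd

df = pd.read_csv("phishing_email.csv")  # the aggregated Kaggle file

# Hypothetical column names; adjust to the actual schema of the CSV.
df["text_combined"] = ("Subject: " + df["subject"].fillna("")
                       + "\nFrom: " + df["sender"].fillna("")
                       + "\nDate: " + df["date"].fillna("")
                       + "\n\n" + df["body"].fillna(""))
```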
---
### ⚙️ Training Procedure
This model was fine-tuned using the Hugging Face 🤗 `Trainer` API with the following configuration:
- **Base model**: `distilbert-base-uncased`
- **Architecture**: Transformer-based sequence classifier (`DistilBertForSequenceClassification`)
- **Epochs**: 3
- **Batch size**: 16
- **Learning rate**: 2e-5
- **Weight decay**: 0.01
- **Evaluation strategy**: Per epoch
- **Monitoring**: All metrics logged via Weights & Biases (W&B)
The model was trained using a Tesla A100 GPU (40GB VRAM) on Google Colab Pro.
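The configuration above corresponds roughly to the following `Trainer` setup (a minimal sketch, not the exact training script; `train_ds` and `val_ds` stand in for the tokenized train/validation splits):

```python
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # 0 = legitimate, 1 = phishing

args = TrainingArguments(
    output_dir="phishing-distilbert",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    weight_decay=0.01,
    eval_strategy="epoch",  # `evaluation_strategy` on older transformers releases
    report_to="wandb",      # metrics logged to Weights & Biases
)

trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=val_ds)
trainer.train()
```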
#### Preprocessing
- Duplicate and null record removal
- Lowercasing and whitespace cleanup
- Tokenization using `DistilBertTokenizer`
- Label encoding (0 = legitimate, 1 = phishing)
- Random Undersampling to ensure class balance
## 📊 Evaluation Results
For updated results and runs, see this public W&B project: [Full Report](https://wandb.ai/dahalaamosh-harrisburg-university/Phishing_Detection_DistilBERT_Uncased)
The fine-tuned DistilBERT model was evaluated on a test dataset containing both phishing and legitimate emails. Below is a summary of its performance compared to baseline models (raw DistilBERT and raw BERT):
### 📈 Fine-Tuned DistilBERT (Best Performing)
| Epoch | Training Loss | Validation Loss | Accuracy | Precision | Recall | F1 Score | ROC AUC |
|-------|----------------|------------------|----------|-----------|--------|----------|---------|
| 1 | 0.0323 | 0.0243 | 0.9936 | 0.9916 | 0.9961 | 0.9939 | 0.9996 |
| 2 | 0.0083 | 0.0297 | 0.9938 | 0.9968 | 0.9912 | 0.9940 | 0.9998 |
| 3 | 0.0044 | 0.0275 | **0.9951** | **0.9959** | **0.9947** | **0.9953** | **0.9997** |
- **Test Set Summary:**
- Accuracy: **96.62%**
- Precision: **95.90%**
- Recall: **97.46%**
- F1 Score: **96.67%**
- ROC AUC: **0.9953**
---
### ⚠️ Raw DistilBERT (Untrained)
- Accuracy: 49.57%
- Precision: 0.00%
- Recall: 0.00%
- F1 Score: 0.00
- ROC AUC: 0.5694
---
### ⚠️ Raw BERT (Untrained)
- Accuracy: 49.57%
- Precision: 0.00%
- Recall: 0.00%
- F1 Score: 0.00
- ROC AUC: 0.4984
---
|
lesso12/c35a739f-7294-45bd-a182-480697569a90 | lesso12 | 2025-04-02T00:40:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"region:us"
]
| null | 2025-04-01T22:26:57Z | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c35a739f-7294-45bd-a182-480697569a90
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 31456311889b5d6c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/31456311889b5d6c_train_data.json
type:
field_instruction: prompt
field_output: GEITje-7B-ultra
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso12/c35a739f-7294-45bd-a182-480697569a90
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000212
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/31456311889b5d6c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 120
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5a8b32e4-4bde-4dba-8538-bcf18d1639ae
wandb_project: 12a
wandb_run: your_name
wandb_runid: 5a8b32e4-4bde-4dba-8538-bcf18d1639ae
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c35a739f-7294-45bd-a182-480697569a90
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000212
- train_batch_size: 4
- eval_batch_size: 4
- seed: 120
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 1.7677 |
| 1.3504 | 0.3380 | 500 | 1.3492 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
BoghdadyJR/QWEN_10EP_UIXRAY | BoghdadyJR | 2025-04-02T00:39:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T00:39:18Z | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BoghdadyJR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/AjakatlLlama3.2-3B-GGUF | mradermacher | 2025-04-02T00:39:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:RenZaijay/AjakatlLlama3.2-3B",
"base_model:quantized:RenZaijay/AjakatlLlama3.2-3B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T00:18:24Z | ---
base_model: RenZaijay/AjakatlLlama3.2-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RenZaijay/AjakatlLlama3.2-3B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AjakatlLlama3.2-3B-GGUF/resolve/main/AjakatlLlama3.2-3B.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Kratosgado/ppo-LunarLander-v2 | Kratosgado | 2025-04-02T00:38:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-02T00:37:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 276.46 +/- 16.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The checkpoint filename is assumed to follow the usual huggingface_sb3 convention.
checkpoint = load_from_hub(repo_id="Kratosgado/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
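Once loaded, the policy can be sanity-checked against the reported score (assuming `gymnasium` with the Box2D extra installed and the same environment id the agent was trained on):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```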
|
MegaTTS3/MegaTTS3 | MegaTTS3 | 2025-04-02T00:38:00Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T00:37:40Z | ---
license: apache-2.0
---
|
AKTODTH/Intervnvl-2.5-1B-lora-GRPO-medical-reasoning-560steps | AKTODTH | 2025-04-02T00:37:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
]
| feature-extraction | 2025-04-02T00:36:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso18/3c23e114-0e63-4d68-85ba-c58f7ac19bd1 | lesso18 | 2025-04-02T00:35:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-01T22:23:45Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3c23e114-0e63-4d68-85ba-c58f7ac19bd1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb9eeb2bc4f9a42e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb9eeb2bc4f9a42e_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso18/3c23e114-0e63-4d68-85ba-c58f7ac19bd1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000218
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/fb9eeb2bc4f9a42e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 180
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9ea56f36-5dbd-4090-b44c-74e440cd3c27
wandb_project: 18a
wandb_run: your_name
wandb_runid: 9ea56f36-5dbd-4090-b44c-74e440cd3c27
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3c23e114-0e63-4d68-85ba-c58f7ac19bd1
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000218
- train_batch_size: 4
- eval_batch_size: 4
- seed: 180
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 0.6659 |
| 0.2241 | 0.1436 | 500 | 0.2210 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
merty/claude-s1-20250401_230715 | merty | 2025-04-02T00:34:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:08:10Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: claude-s1-20250401_230715
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# claude-s1-20250401_230715
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
gym890/diffportrait360 | gym890 | 2025-04-02T00:32:40Z | 0 | 3 | null | [
"en",
"arxiv:2503.15667",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-21T19:18:42Z | ---
license: apache-2.0
language:
- en
---
arxiv.org/abs/2503.15667
<p align="center">
  <h2 align="center">[CVPR'25] DiffPortrait360: Consistent Portrait Diffusion for 360 View Synthesis</h2>
<p align="center">
<a href="https://freedomgu.github.io/">Yuming Gu</a><sup>1,2</sup>
·
<a href="https://p0lyfish.github.io/portfolio/">Phong Tran</a><sup>2</sup>
·
<a href="https://paulyzheng.github.io/about/">Yujian Zheng</a><sup>2</sup>
·
<a href="https://hongyixu37.github.io/homepage/">Hongyi Xu</a><sup>3</sup>
·
<a href="https://lhyfst.github.io/">Heyuan Li</a><sup>4</sup>
·
<a href="https://www.linkedin.com/in/adilbek-karmanov?originalSubdomain=ae">Adilbek Karmanov</a><sup>2</sup>
·
<a href="https://hao-li.com">Hao Li</a><sup>2,5</sup>
<br>
<sup>1</sup>University of Southern California <sup>2</sup>MBZUAI <sup>3</sup>ByteDance Inc.
<br>
<sup>4</sup>The Chinese University of Hong Kong, Shenzhen <sup>5</sup>Pinscreen Inc.
<br>
</br>
<a href="https://arxiv.org/abs/2503.15667">
<img src='https://img.shields.io/badge/arXiv-diffportrait360-green' alt='Paper PDF'>
</a>
<a href='https://freedomgu.github.io/DiffPortrait360/'>
<img src='https://img.shields.io/badge/Project_Page-diffportrait360-blue' alt='Project Page'></a>
<a href='https://huggingface.co/gym890/diffportrait360'>
<img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a>
</br>
</p>
## 📜 Requirements
* An NVIDIA GPU with CUDA support is required.
* We have tested on a single A6000 GPU.
* **Minimum**: 30GB of GPU memory is required to generate a single NVS video (batch_size=1) of 32 frames per run.
* **Recommended**: We recommend using a GPU with 40GB of memory.
* Operating system: Linux
## 🧱 Download Pretrained Models
```bash
Diffportrait360
|----...
|----pretrained_weights
|----back_head-230000.th # back head generator
|----model_state-3400000.th # diffportrait360 main module
|----easy-khair-180-gpc0.8-trans10-025000.th
|----...
```
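The checkpoints can be fetched from this repository with `huggingface_hub`, for example (a minimal sketch, assuming the files are stored at the repository root):

```python
from huggingface_hub import snapshot_download

# Downloads every file in this repo into ./pretrained_weights
snapshot_download(repo_id="gym890/diffportrait360",
                  local_dir="pretrained_weights")
```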
## 🔗 BibTeX
If you find [DiffPortrait360](https://arxiv.org/abs/2503.15667) useful for your research and applications, please cite it using this BibTeX:
```BibTeX
@article{gu2025diffportrait360,
title={DiffPortrait360: Consistent Portrait Diffusion for 360 View Synthesis},
author={Gu, Yuming and Tran, Phong and Zheng, Yujian and Xu, Hongyi and Li, Heyuan and Karmanov, Adilbek and Li, Hao},
journal={arXiv preprint arXiv:2503.15667},
year={2025}
}
```
## License
Our code is distributed under the Apache-2.0 license.
## Acknowledgements
This work is supported by the Metaverse Center Grant from the MBZUAI Research Office. We appreciate the contributions from [Diffportrait3D](https://github.com/FreedomGu/DiffPortrait3D), [PanoHead](https://github.com/SizheAn/PanoHead), [SphereHead](https://lhyfst.github.io/spherehead/), [ControlNet](https://github.com/lllyasviel/ControlNet) for their open-sourced research. We thank [Egor Zakharov](https://egorzakharov.github.io/), [Zhenhui Lin](https://www.linkedin.com/in/zhenhui-lin-5b6510226/?originalSubdomain=ae), [Maksat Kengeskanov](https://www.linkedin.com/in/maksat-kengeskanov/), and Yiming Chen for the early discussions, helpful suggestions, and feedback.
|
dgambettaphd/M_llama-3-8b_gen0_W_doc1000_synt64_MPPTrue_lastFalse | dgambettaphd | 2025-04-02T00:32:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-04-02T00:29:35Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Cuidarte/tatiana | Cuidarte | 2025-04-02T00:31:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T00:07:43Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: T4TIAN4
---
# Tatiana
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `T4TIAN4` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "T4TIAN4",
"lora_weights": "https://huggingface.co/Cuidarte/tatiana/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Cuidarte/tatiana', weight_name='lora.safetensors')
image = pipeline('T4TIAN4').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Cuidarte/tatiana/discussions) to add images that show off what you’ve made with this LoRA.
|
Sid068/qwen-FORIRCHAT | Sid068 | 2025-04-02T00:30:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"conversational",
"en",
"arxiv:2409.12191",
"arxiv:2308.12966",
"base_model:Qwen/Qwen2-VL-7B",
"base_model:finetune:Qwen/Qwen2-VL-7B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-04-01T21:59:28Z | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
base_model:
- Qwen/Qwen2-VL-7B
new_version: Qwen/Qwen2.5-VL-7B-Instruct
---
# Qwen2-VL-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with complex reasoning and decision-making abilities, Qwen2-VL can be integrated with devices like mobile phones and robots for automatic operation based on the visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
</p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
</p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | **Qwen2-VL-7B** |
| :--- | :---: | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 51.8 | 49.8 | **60**| 54.1 |
| DocVQA<sub>test</sub> | 91.6 | 90.8 | - | **94.5** |
| InfoVQA<sub>test</sub> | 74.8 | - | - |**76.5** |
| ChartQA<sub>test</sub> | **83.3** | - |- | 83.0 |
| TextVQA<sub>val</sub> | 77.4 | 80.1 | -| **84.3** |
| OCRBench | 794 | **852** | 785 | 845 |
| MTVQA | - | - | -| **26.3** |
| VCR<sub>en easy</sub> | - | 73.88 | 83.60 | **89.70** |
| VCR<sub>zh easy</sub> | - | 10.18| 1.10 | **59.94** |
| RealWorldQA | 64.4 | - | - | **70.1** |
| MME<sub>sum</sub> | 2210.3 | **2348.4** | 2003.4| 2326.8 |
| MMBench-EN<sub>test</sub> | 81.7 | - | - | **83.0** |
| MMBench-CN<sub>test</sub> | **81.2** | - | - | 80.5 |
| MMBench-V1.1<sub>test</sub> | 79.4 | 78.0 | 76.0| **80.7** |
| MMT-Bench<sub>test</sub> | - | - | - |**63.7** |
| MMStar | **61.5** | 57.5 | 54.8 | 60.7 |
| MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | **66.9** | 62.0 |
| HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| **50.6** |
| MathVista<sub>testmini</sub> | 58.3 | **60.6** | 52.4 | 58.2 |
| MathVision | - | - | - | **16.3** |
### Video Benchmarks
| Benchmark | Internvl2-8B | LLaVA-OneVision-7B | MiniCPM-V 2.6 | **Qwen2-VL-7B** |
| :--- | :---: | :---: | :---: | :---: |
| MVBench | 66.4 | 56.7 | - | **67.0** |
| PerceptionTest<sub>test</sub> | - | 57.1 | - | **62.3** |
| EgoSchema<sub>test</sub> | - | 60.1 | - | **66.7** |
| Video-MME<sub>wo/w subs</sub> | 54.0/56.9 | 58.2/- | 60.9/63.6 | **63.3**/**69.0** |
## Requirements
The code of Qwen2-VL is available in the latest Hugging Face transformers, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`; otherwise you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-7B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a list of images (treated as a video) and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image at the desired position in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, we provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to cite us.
```
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
``` |
bowilleatyou/8ce341fb-16fe-4667-bb99-4ce37c31d58c | bowilleatyou | 2025-04-02T00:29:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T19:26:42Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuseifer/mt5-small-RecetasdelaAbuela-es | Yuseifer | 2025-04-02T00:28:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2025-04-01T02:04:40Z | ---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-RecetasdelaAbuela-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-RecetasdelaAbuela-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1833
- Rouge1: 80.0
- Rouge2: 61.5385
- Rougel: 80.0
- Rougelsum: 80.0
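A minimal inference sketch (assumed usage — the card does not specify an input format; the Spanish recipe text below is illustrative):
```python
from transformers import pipeline
# Load the fine-tuned mT5 checkpoint for Spanish recipe summarization.
summarizer = pipeline("summarization", model="Yuseifer/mt5-small-RecetasdelaAbuela-es")
receta = "Mezcla la harina con el azúcar, añade los huevos y hornea la masa durante 30 minutos."
print(summarizer(receta, max_length=48)[0]["summary_text"])
```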
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:------:|:---------:|
| 1.4237 | 1.0 | 2024 | 0.2535 | 80.0 | 61.5385 | 80.0 | 80.0 |
| 0.2757 | 2.0 | 4048 | 0.2099 | 80.0 | 61.5385 | 80.0 | 80.0 |
| 0.2417 | 3.0 | 6072 | 0.1991 | 80.0 | 61.5385 | 80.0 | 80.0 |
| 0.224 | 4.0 | 8096 | 0.1927 | 80.0 | 61.5385 | 80.0 | 80.0 |
| 0.2165 | 5.0 | 10120 | 0.1875 | 80.0 | 61.5385 | 80.0 | 80.0 |
| 0.2076 | 6.0 | 12144 | 0.1858 | 80.0 | 61.5385 | 80.0 | 80.0 |
| 0.2042 | 7.0 | 14168 | 0.1837 | 80.0 | 61.5385 | 80.0 | 80.0 |
| 0.2005 | 8.0 | 16192 | 0.1833 | 80.0 | 61.5385 | 80.0 | 80.0 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
MinaMila/llama_instbase_Adult_4ep_55 | MinaMila | 2025-04-02T00:28:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T00:24:52Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
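A minimal loading sketch (assumed usage; the finetune should follow the Llama 3.1 Instruct chat template inherited from its base model):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MinaMila/llama_instbase_Adult_4ep_55", device_map="auto")
messages = [{"role": "user", "content": "Hello!"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```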
|
Chrisdxxx/aurana-tarot | Chrisdxxx | 2025-04-02T00:27:15Z | 0 | 0 | null | [
"safetensors",
"t5",
"license:openrail++",
"region:us"
]
| null | 2025-04-02T00:26:16Z | ---
license: openrail++
---
|
kk-aivio/3e9f363e-5401-4824-a5d7-f618636f3df3 | kk-aivio | 2025-04-02T00:26:00Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"region:us"
]
| null | 2025-04-02T00:25:08Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Yarn-Mistral-7b-64k
model-index:
- name: kk-aivio/3e9f363e-5401-4824-a5d7-f618636f3df3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/3e9f363e-5401-4824-a5d7-f618636f3df3
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
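A minimal loading sketch for this adapter (an assumption based on the PEFT metadata above; Yarn-Mistral may require `trust_remote_code=True` for its extended-context code):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the base model, then attach this repository's adapter weights.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Mistral-7b-64k", device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "kk-aivio/3e9f363e-5401-4824-a5d7-f618636f3df3")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-64k")
```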
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xw17/Llama-3.2-3B-Instruct_finetuned_3_def_lora | xw17 | 2025-04-02T00:25:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T03:56:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MetaBoi/meta_arxiv_v2 | MetaBoi | 2025-04-02T00:21:34Z | 64 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-30T00:13:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Cereloop/qwen_junior_maths_q4_k_m_v1 | Cereloop | 2025-04-02T00:17:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T00:17:34Z | ---
base_model: unsloth/qwen2.5-math-1.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Cereloop
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-math-1.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
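A minimal inference sketch for the GGUF file via llama-cpp-python (an assumption — the card does not prescribe a runtime, and the filename below is illustrative; use the actual file in this repo):
```python
from llama_cpp import Llama
# Load the Q4_K_M quant; adjust model_path to the downloaded GGUF file.
llm = Llama(model_path="qwen_junior_maths_q4_k_m_v1.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve: 12 x 7 = ?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```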
|
MinaMila/llama_instbase_Adult_3ep_55 | MinaMila | 2025-04-02T00:16:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T00:13:31Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xw17/Llama-3.2-3B-Instruct_finetuned_1_def_lora | xw17 | 2025-04-02T00:15:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T03:47:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hymanshu/gemma-3-1b-gsm8k-grpo-finetune | hymanshu | 2025-04-02T00:09:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T00:09:10Z | ---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hymanshu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
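A minimal inference sketch (assumed usage; GRPO training on GSM8K suggests math-style prompts, and gemma3 support requires a recent transformers release):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="hymanshu/gemma-3-1b-gsm8k-grpo-finetune", device_map="auto")
messages = [{"role": "user", "content": "A baker sells 48 rolls on Monday and half as many on Tuesday. How many rolls in total?"}]
print(generator(messages, max_new_tokens=256)[0]["generated_text"])
```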
|
airdrop4u00/airdrop4u00 | airdrop4u00 | 2025-04-02T00:09:00Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2025-04-02T00:08:59Z | ---
license: bigscience-openrail-m
---
|
hymanshu/gemma-3-1b-gsm8k-grpo | hymanshu | 2025-04-02T00:08:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T00:08:35Z | ---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hymanshu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gainantar/codet5p-lora-conala | gainantar | 2025-04-02T00:01:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Salesforce/codet5p-220m",
"base_model:adapter:Salesforce/codet5p-220m",
"region:us"
]
| null | 2025-04-02T00:01:22Z | ---
base_model: Salesforce/codet5p-220m
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
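A minimal loading sketch for this adapter (an assumption based on the base-model metadata; CodeT5+ 220m is a seq2seq model, and CoNaLa-style usage maps a natural-language intent to a Python snippet):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel
base = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5p-220m")
model = PeftModel.from_pretrained(base, "gainantar/codet5p-lora-conala")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5p-220m")
inputs = tokenizer("sort a list of tuples by the second element", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```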
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf | RichardErkhov | 2025-04-02T00:00:37Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-01T23:00:56Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi3_intent_v48_1_w_unknown_6_lr_0.002 - GGUF
- Model creator: https://huggingface.co/magnifi/
- Original model: https://huggingface.co/magnifi/Phi3_intent_v48_1_w_unknown_6_lr_0.002/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q2_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q2_K.gguf) | Q2_K | 1.35GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ3_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ3_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q3_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q3_K.gguf) | Q3_K | 1.75GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_K.gguf) | Q4_K | 2.16GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_K.gguf) | Q5_K | 2.53GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q6_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q8_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf/blob/main/Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q8_0.gguf) | Q8_0 | 3.78GB |
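A minimal download-and-run sketch (an assumption — the repo does not prescribe a runtime; shown with `huggingface_hub` and llama-cpp-python, picking the Q4_K_M quant as an example):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
path = hf_hub_download(
    repo_id="RichardErkhov/magnifi_-_Phi3_intent_v48_1_w_unknown_6_lr_0.002-gguf",
    filename="Phi3_intent_v48_1_w_unknown_6_lr_0.002.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Classify the intent: show me Tesla stock", max_tokens=64)["choices"][0]["text"])
```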
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
miike-ai/qwen-coder-python-0.5b | miike-ai | 2025-04-01T23:59:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-Coder-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-0.5B-Instruct",
"region:us"
]
| null | 2025-04-01T23:59:14Z | ---
base_model: unsloth/Qwen2.5-Coder-0.5B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
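In the absence of an official snippet, here is a minimal sketch, assuming this repository hosts a PEFT (LoRA) adapter for the base model named in the card metadata (`unsloth/Qwen2.5-Coder-0.5B-Instruct`); the prompt is only illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-Coder-0.5B-Instruct"   # base model from the card metadata
adapter_id = "miike-ai/qwen-coder-python-0.5b"    # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

# Illustrative prompt; the adapter's intended prompt format is not documented.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```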
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
fguzelant/bert-base-uncased-finetuned-rte-run_3_best | fguzelant | 2025-04-01T23:58:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-01T22:37:46Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-rte-run_3_best
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-rte-run_3_best
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6393
- Accuracy: 0.6968
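The auto-generated card omits a usage snippet; below is a minimal inference sketch, assuming a standard sequence-classification head over RTE-style sentence pairs (the label-to-index mapping is an assumption — check the repo's `config.json`):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "fguzelant/bert-base-uncased-finetuned-rte-run_3_best"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# RTE is a sentence-pair task: does the premise entail the hypothesis?
enc = tokenizer("A man is playing a guitar.",
                "A person is making music.",
                return_tensors="pt")
with torch.no_grad():
    probs = model(**enc).logits.softmax(-1)
print(probs)  # map indices to labels via model.config.id2label
```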
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.137260772790819e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 0.6517 | 0.6209 |
| No log | 2.0 | 40 | 0.6393 | 0.6968 |
| No log | 3.0 | 60 | 0.7254 | 0.6859 |
| No log | 4.0 | 80 | 0.8398 | 0.6679 |
| No log | 5.0 | 100 | 0.9155 | 0.6895 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
xw17/Llama-3.2-1B-Instruct_finetuned_4_def_lora | xw17 | 2025-04-01T23:56:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T03:31:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_Adult_1ep_55 | MinaMila | 2025-04-01T23:54:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:51:30Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chenggong1995/Qwen-2.5-Base-7B-mixed-gen8-scale-ghpo-hint0.3-epoch1 | chenggong1995 | 2025-04-01T23:53:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:chenggong1995/MATH-lighteval-om220k-Mixed-ghpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T09:01:55Z | ---
base_model: Qwen/Qwen2.5-7B
datasets: chenggong1995/MATH-lighteval-om220k-Mixed-ghpo
library_name: transformers
model_name: Qwen-2.5-Base-7B-mixed-gen8-scale-ghpo-hint0.3-epoch1
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-Base-7B-mixed-gen8-scale-ghpo-hint0.3-epoch1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the [chenggong1995/MATH-lighteval-om220k-Mixed-ghpo](https://huggingface.co/datasets/chenggong1995/MATH-lighteval-om220k-Mixed-ghpo) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chenggong1995/Qwen-2.5-Base-7B-mixed-gen8-scale-ghpo-hint0.3-epoch1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gongc1995-city-university-of-hong-kong/huggingface/runs/vzr3whhd)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Homer-7B-v0.1-GGUF | mradermacher | 2025-04-01T23:53:27Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:newsbang/Homer-7B-v0.1",
"base_model:quantized:newsbang/Homer-7B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-01T23:18:26Z | ---
base_model: newsbang/Homer-7B-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/newsbang/Homer-7B-v0.1
<!-- provided-files -->
weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
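As a concrete starting point, here is a minimal sketch using the `llama-cpp-python` bindings (one of several llama.cpp-based runtimes — an illustrative choice, not the only option), assuming one of the quant files below has already been downloaded:
```python
from llama_cpp import Llama

# Path to a locally downloaded quant file from this repository.
llm = Llama(model_path="Homer-7B-v0.1.Q4_K_M.gguf", n_ctx=2048)

out = llm("Write a haiku about the sea.", max_tokens=64)
print(out["choices"][0]["text"])
```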
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Homer-7B-v0.1-GGUF/resolve/main/Homer-7B-v0.1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf | RichardErkhov | 2025-04-01T23:53:03Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-01T22:51:09Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi3_intent_v47_3_w_unknown_7_lr_0.002 - GGUF
- Model creator: https://huggingface.co/magnifi/
- Original model: https://huggingface.co/magnifi/Phi3_intent_v47_3_w_unknown_7_lr_0.002/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q2_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q2_K.gguf) | Q2_K | 1.35GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ3_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ3_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q3_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q3_K.gguf) | Q3_K | 1.75GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_K.gguf) | Q4_K | 2.16GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_K.gguf) | Q5_K | 2.53GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q6_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q8_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v47_3_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v47_3_w_unknown_7_lr_0.002.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
inrainbws/vit_r16_mlora | inrainbws | 2025-04-01T23:51:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T01:23:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kort/r42 | Kort | 2025-04-01T23:49:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:23:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
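The tags indicate a standard Llama-architecture text-generation checkpoint, so a minimal sketch (an assumption, since the card itself is unfilled) would be:
```python
from transformers import pipeline

# Assumes the repo contains a standard causal-LM checkpoint, per its tags.
generator = pipeline("text-generation", model="Kort/r42")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```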
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
giulianad/llama-3.2-2B-GRAFT-Text2KG-LoRA-v4 | giulianad | 2025-04-01T23:48:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T23:47:59Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** giulianad
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bhushan4829/llama3-symptom-disease-prediction | bhushan4829 | 2025-04-01T23:43:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T23:54:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JacksonBrune/524be5ce-36b3-46de-bcce-1d281ec2cb79 | JacksonBrune | 2025-04-01T23:43:15Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:furiosa-ai/mlperf-gpt-j-6b",
"base_model:adapter:furiosa-ai/mlperf-gpt-j-6b",
"region:us"
]
| null | 2025-04-01T23:42:34Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: furiosa-ai/mlperf-gpt-j-6b
model-index:
- name: JacksonBrune/524be5ce-36b3-46de-bcce-1d281ec2cb79
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JacksonBrune/524be5ce-36b3-46de-bcce-1d281ec2cb79
This model is a PEFT adapter for [furiosa-ai/mlperf-gpt-j-6b](https://huggingface.co/furiosa-ai/mlperf-gpt-j-6b), trained on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MinaMila/llama_instbase_Adult_15ep_42 | MinaMila | 2025-04-01T23:42:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:39:12Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ibrahkadabra/classifier_2.0 | ibrahkadabra | 2025-04-01T23:41:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T23:39:57Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ibrahkadabra
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KoichiYasuoka/roberta-base-turkish-ud-goeswith | KoichiYasuoka | 2025-04-01T23:38:38Z | 0 | 0 | null | [
"pytorch",
"roberta",
"turkish",
"token-classification",
"pos",
"dependency-parsing",
"tr",
"dataset:universal_dependencies",
"base_model:TURKCELL/roberta-base-turkish-uncased",
"base_model:finetune:TURKCELL/roberta-base-turkish-uncased",
"license:mit",
"region:us"
]
| token-classification | 2025-04-01T23:34:42Z | ---
language:
- "tr"
tags:
- "turkish"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: TURKCELL/roberta-base-turkish-uncased
datasets:
- "universal_dependencies"
license: "mit"
pipeline_tag: "token-classification"
---
# roberta-base-turkish-ud-goeswith
## Model Description
This is a RoBERTa model for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-turkish-uncased](https://huggingface.co/TURKCELL/roberta-base-turkish-uncased).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-base-turkish-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("Ay dağın diğer tarafında yükseldi"))
```
|
kukjuhn/output2 | kukjuhn | 2025-04-01T23:34:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-01T23:33:48Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: output2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.119 | 1.0 | 7979 | 0.2677 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.8.0.dev20250327+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mradermacher/maids-GGUF | mradermacher | 2025-04-01T23:33:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:QIOvO/maids",
"base_model:quantized:QIOvO/maids",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-01T22:41:44Z | ---
base_model: QIOvO/maids
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/QIOvO/maids
<!-- provided-files -->
weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/maids-GGUF/resolve/main/maids.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MinaMila/llama_instbase_Adult_14ep_42 | MinaMila | 2025-04-01T23:31:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:28:24Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mergekit-community/mergekit-dare_ties-itrjkoc | mergekit-community | 2025-04-01T23:31:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:ReadyArt/Gaslight-24B-v1.0",
"base_model:merge:ReadyArt/Gaslight-24B-v1.0",
"base_model:mergekit-community/mergekit-model_stock-mhvxkzw",
"base_model:merge:mergekit-community/mergekit-model_stock-mhvxkzw",
"base_model:mergekit-community/mergekit-model_stock-tiwlqms",
"base_model:merge:mergekit-community/mergekit-model_stock-tiwlqms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:22:29Z | ---
base_model:
- mergekit-community/mergekit-model_stock-mhvxkzw
- mergekit-community/mergekit-model_stock-tiwlqms
- ReadyArt/Gaslight-24B-v1.0
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [ReadyArt/Gaslight-24B-v1.0](https://huggingface.co/ReadyArt/Gaslight-24B-v1.0) as a base.
### Models Merged
The following models were included in the merge:
* [mergekit-community/mergekit-model_stock-mhvxkzw](https://huggingface.co/mergekit-community/mergekit-model_stock-mhvxkzw)
* [mergekit-community/mergekit-model_stock-tiwlqms](https://huggingface.co/mergekit-community/mergekit-model_stock-tiwlqms)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float32
out_dtype: bfloat16
merge_method: dare_ties
base_model: ReadyArt/Gaslight-24B-v1.0
models:
- model: mergekit-community/mergekit-model_stock-tiwlqms
parameters:
density: 1
weight: 1
- model: mergekit-community/mergekit-model_stock-mhvxkzw
parameters:
density: 0.75
weight: 0.75
parameters:
normalize: true
int8_mask: true
```
|
bartowski/katanemo_Arch-Function-Chat-3B-GGUF | bartowski | 2025-04-01T23:30:29Z | 0 | 0 | null | [
"gguf",
"text-generation",
"en",
"base_model:katanemo/Arch-Function-Chat-3B",
"base_model:quantized:katanemo/Arch-Function-Chat-3B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-04-01T22:10:10Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model_relation: quantized
license_name: katanemo-research
base_model: katanemo/Arch-Function-Chat-3B
language:
- en
license: other
license_link: https://huggingface.co/katanemo/Arch-Function-Chat-3B/blob/main/LICENSE
---
## Llamacpp imatrix Quantizations of Arch-Function-Chat-3B by katanemo
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5010">b5010</a> for quantization.
Original model: https://huggingface.co/katanemo/Arch-Function-Chat-3B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
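The template above is ChatML. As a quick local test of one of the quants below, here is a minimal sketch using the llama-cpp-python bindings (one assumption among many possible llama.cpp-based runtimes; the filename and prompt are examples):
```python
from llama_cpp import Llama

# Assumes the Q4_K_M quant from the table below has been downloaded locally.
llm = Llama(model_path="katanemo_Arch-Function-Chat-3B-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What tools can you call?"},  # example prompt
])
print(out["choices"][0]["message"]["content"])
```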
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Arch-Function-Chat-3B-bf16.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-bf16.gguf) | bf16 | 6.18GB | false | Full BF16 weights. |
| [Arch-Function-Chat-3B-Q8_0.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q8_0.gguf) | Q8_0 | 3.29GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Arch-Function-Chat-3B-Q6_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q6_K_L.gguf) | Q6_K_L | 2.61GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Arch-Function-Chat-3B-Q6_K.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q6_K.gguf) | Q6_K | 2.54GB | false | Very high quality, near perfect, *recommended*. |
| [Arch-Function-Chat-3B-Q5_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q5_K_L.gguf) | Q5_K_L | 2.30GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Arch-Function-Chat-3B-Q5_K_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q5_K_M.gguf) | Q5_K_M | 2.22GB | false | High quality, *recommended*. |
| [Arch-Function-Chat-3B-Q5_K_S.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q5_K_S.gguf) | Q5_K_S | 2.17GB | false | High quality, *recommended*. |
| [Arch-Function-Chat-3B-Q4_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q4_K_L.gguf) | Q4_K_L | 2.01GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Arch-Function-Chat-3B-Q4_1.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q4_1.gguf) | Q4_1 | 2.00GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Arch-Function-Chat-3B-Q4_K_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q4_K_M.gguf) | Q4_K_M | 1.93GB | false | Good quality, default size for most use cases, *recommended*. |
| [Arch-Function-Chat-3B-Q4_K_S.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q4_K_S.gguf) | Q4_K_S | 1.83GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Arch-Function-Chat-3B-Q4_0.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q4_0.gguf) | Q4_0 | 1.83GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Arch-Function-Chat-3B-IQ4_NL.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-IQ4_NL.gguf) | IQ4_NL | 1.83GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Arch-Function-Chat-3B-Q3_K_XL.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q3_K_XL.gguf) | Q3_K_XL | 1.78GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Arch-Function-Chat-3B-IQ4_XS.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-IQ4_XS.gguf) | IQ4_XS | 1.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Arch-Function-Chat-3B-Q3_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q3_K_L.gguf) | Q3_K_L | 1.71GB | false | Lower quality but usable, good for low RAM availability. |
| [Arch-Function-Chat-3B-Q3_K_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q3_K_M.gguf) | Q3_K_M | 1.59GB | false | Low quality. |
| [Arch-Function-Chat-3B-IQ3_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-IQ3_M.gguf) | IQ3_M | 1.49GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Arch-Function-Chat-3B-Q3_K_S.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q3_K_S.gguf) | Q3_K_S | 1.45GB | false | Low quality, not recommended. |
| [Arch-Function-Chat-3B-IQ3_XS.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-IQ3_XS.gguf) | IQ3_XS | 1.39GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Arch-Function-Chat-3B-Q2_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q2_K_L.gguf) | Q2_K_L | 1.35GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Arch-Function-Chat-3B-IQ3_XXS.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-IQ3_XXS.gguf) | IQ3_XXS | 1.28GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Arch-Function-Chat-3B-Q2_K.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-Q2_K.gguf) | Q2_K | 1.27GB | false | Very low quality but surprisingly usable. |
| [Arch-Function-Chat-3B-IQ2_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-3B-GGUF/blob/main/katanemo_Arch-Function-Chat-3B-IQ2_M.gguf) | IQ2_M | 1.14GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method but with the embedding and output weights quantized to Q8_0 instead of their usual default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/katanemo_Arch-Function-Chat-3B-GGUF --include "katanemo_Arch-Function-Chat-3B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/katanemo_Arch-Function-Chat-3B-GGUF --include "katanemo_Arch-Function-Chat-3B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (katanemo_Arch-Function-Chat-3B-Q8_0) or download them all in place (./)
</details>
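If you would rather script the download than use the CLI, the same files can be fetched with the `huggingface_hub` Python API (the filename below is one example from the table above):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/katanemo_Arch-Function-Chat-3B-GGUF",
    filename="katanemo_Arch-Function-Chat-3B-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF
```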
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do so automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM (though only the 4_4 variant for now). Loading may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
jhyun0414/20250401-Llama-3.1-8B-Instruct-llama_label-5.0 | jhyun0414 | 2025-04-01T23:29:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:24:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vivek1192/merged_medicalQAlinear-hindi | vivek1192 | 2025-04-01T23:27:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:BioMistral/BioMistral-7B",
"base_model:merge:BioMistral/BioMistral-7B",
"base_model:OdiaGenAI/mistral_hindi_7b_base_v1",
"base_model:merge:OdiaGenAI/mistral_hindi_7b_base_v1",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:merge:mistralai/Mistral-7B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:24:34Z | ---
base_model:
- OdiaGenAI/mistral_hindi_7b_base_v1
- mistralai/Mistral-7B-v0.1
- BioMistral/BioMistral-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merged_models
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [OdiaGenAI/mistral_hindi_7b_base_v1](https://huggingface.co/OdiaGenAI/mistral_hindi_7b_base_v1)
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: linear
modules:
default:
slices:
- sources:
- layer_range: [0, 32]
model: BioMistral/BioMistral-7B
parameters:
density: 0.5
weight: 0.5
- layer_range: [0, 32]
model: OdiaGenAI/mistral_hindi_7b_base_v1
parameters:
density: 0.5
weight: 0.5
- layer_range: [0, 32]
model: mistralai/Mistral-7B-v0.1
parameters:
density: 0.5
weight: 0.5
```
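Since all three source models share the Mistral-7B architecture, the merged checkpoint should load like any other Mistral model. A minimal, untested sketch:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vivek1192/merged_medicalQAlinear-hindi")
model = AutoModelForCausalLM.from_pretrained("vivek1192/merged_medicalQAlinear-hindi")
```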
|
onnx-community/gemma-3-1b-it-ONNX | onnx-community | 2025-04-01T23:24:34Z | 1,404 | 9 | transformers.js | [
"transformers.js",
"onnx",
"gemma3_text",
"text-generation",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:quantized:google/gemma-3-1b-it",
"license:gemma",
"region:us"
]
| text-generation | 2025-03-12T07:06:20Z | ---
pipeline_tag: text-generation
base_model:
- google/gemma-3-1b-it
library_name: transformers.js
license: gemma
new_version: onnx-community/gemma-3-1b-it-ONNX-GQA
---
## Usage
### ONNXRuntime
```py
from transformers import AutoConfig, AutoTokenizer
import onnxruntime
import numpy as np
# 1. Load config, tokenizer, and model
path_to_model = "./gemma-3-1b-it-ONNX"
config = AutoConfig.from_pretrained(path_to_model)
tokenizer = AutoTokenizer.from_pretrained(path_to_model)
decoder_session = onnxruntime.InferenceSession(f"{path_to_model}/onnx/model.onnx")
## Set config values
num_key_value_heads = config.num_key_value_heads
head_dim = config.head_dim
num_hidden_layers = config.num_hidden_layers
eos_token_id = 106 # 106 is for <end_of_turn>
# 2. Prepare inputs
## Create input messages
messages = [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Write me a poem about Machine Learning." },
]
## Apply tokenizer
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="np")
## Prepare decoder inputs
batch_size = inputs['input_ids'].shape[0]
past_key_values = {
f'past_key_values.{layer}.{kv}': np.zeros([batch_size, num_key_value_heads, 0, head_dim], dtype=np.float32)
for layer in range(num_hidden_layers)
for kv in ('key', 'value')
}
input_ids = inputs['input_ids']
position_ids = np.tile(np.arange(1, input_ids.shape[-1] + 1), (batch_size, 1))
# 3. Generation loop
max_new_tokens = 1024
generated_tokens = np.array([[]], dtype=np.int64)
for i in range(max_new_tokens):
logits, *present_key_values = decoder_session.run(None, dict(
input_ids=input_ids,
position_ids=position_ids,
**past_key_values,
))
## Update values for next generation loop
input_ids = logits[:, -1].argmax(-1, keepdims=True)
position_ids = position_ids[:, -1:] + 1
for j, key in enumerate(past_key_values):
past_key_values[key] = present_key_values[j]
generated_tokens = np.concatenate([generated_tokens, input_ids], axis=-1)
if (input_ids == eos_token_id).all():
break
## (Optional) Streaming
print(tokenizer.decode(input_ids[0]), end='', flush=True)
print()
# 4. Output result
print(tokenizer.batch_decode(generated_tokens))
```
<details>
<summary>See example output</summary>
```
Okay, here’s a poem about Machine Learning, aiming for a balance of technical and evocative language:
**The Silent Learner**
The data streams, a boundless flow,
A river vast, where patterns grow.
No human hand to guide the way,
Just algorithms, come what may.
Machine Learning, a subtle art,
To teach a system, a brand new start.
With weights and biases, finely tuned,
It seeks the truth, beneath the moon.
It learns from errors, big and small,
Adjusting swiftly, standing tall.
From pixels bright to voices clear,
It builds a model, banishing fear.
Of blind prediction, cold and stark,
It finds the meaning, leaves its mark.
A network deep, a complex grace,
Discovering insights, time and space.
It sees the trends, the subtle hue,
Predicting futures, fresh and new.
A silent learner, ever keen,
A digital mind, unseen, serene.
So let the code begin to gleam,
A blossoming of a learning dream.
Machine Learning, a wondrous sight,
Shaping the future, shining bright.
---
Would you like me to:
* Adjust the tone or style? (e.g., more technical, more metaphorical)
* Focus on a specific aspect of ML (e.g., neural networks, data analysis)?
* Create a different length or format?
```
</details>
### Transformers.js
```js
import { pipeline } from "@huggingface/transformers";
// Create a text generation pipeline
const generator = await pipeline(
"text-generation",
"onnx-community/gemma-3-1b-it-ONNX",
{ dtype: "q4" },
);
// Define the list of messages
const messages = [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Write me a poem about Machine Learning." },
];
// Generate a response
const output = await generator(messages, { max_new_tokens: 512, do_sample: false });
console.log(output[0].generated_text.at(-1).content);
``` |
nurzhanit/whisper-omg-2.5.4 | nurzhanit | 2025-04-01T23:22:41Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:nurzhanit/whisper-enhanced-ml",
"base_model:finetune:nurzhanit/whisper-enhanced-ml",
"model-index",
"region:us"
]
| null | 2025-04-01T20:32:34Z | ---
language:
- hi
base_model: nurzhanit/whisper-enhanced-ml
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: default
split: None
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 0.5728133950209298
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [nurzhanit/whisper-enhanced-ml](https://huggingface.co/nurzhanit/whisper-enhanced-ml) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0019
- Wer: 0.5728
## Model description
More information needed
## Intended uses & limitations
More information needed
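As a starting point, the checkpoint should work with the standard ASR pipeline; a minimal sketch (`sample.wav` is a placeholder file, not part of this repo):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="nurzhanit/whisper-omg-2.5.4")
print(asr("sample.wav")["text"])
```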
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0722 | 1.4706 | 50 | 0.0438 | 6.0586 |
| 0.0318 | 2.9412 | 100 | 0.0132 | 2.1370 |
| 0.0098 | 4.4118 | 150 | 0.0051 | 1.0355 |
| 0.0058 | 5.8824 | 200 | 0.0025 | 0.6389 |
| 0.0018 | 7.3529 | 250 | 0.0019 | 0.5728 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.2
- Tokenizers 0.19.1
|
bartowski/katanemo_Arch-Function-Chat-7B-GGUF | bartowski | 2025-04-01T23:16:42Z | 0 | 0 | null | [
"gguf",
"text-generation",
"en",
"base_model:katanemo/Arch-Function-Chat-7B",
"base_model:quantized:katanemo/Arch-Function-Chat-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-04-01T22:10:04Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model_relation: quantized
license_name: katanemo-research
base_model: katanemo/Arch-Function-Chat-7B
language:
- en
license: other
license_link: https://huggingface.co/katanemo/Arch-Function-Chat-7B/blob/main/LICENSE
---
## Llamacpp imatrix Quantizations of Arch-Function-Chat-7B by katanemo
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5010">b5010</a> for quantization.
Original model: https://huggingface.co/katanemo/Arch-Function-Chat-7B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Arch-Function-Chat-7B-bf16.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-bf16.gguf) | bf16 | 15.24GB | false | Full BF16 weights. |
| [Arch-Function-Chat-7B-Q8_0.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q8_0.gguf) | Q8_0 | 8.10GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Arch-Function-Chat-7B-Q6_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q6_K_L.gguf) | Q6_K_L | 6.52GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Arch-Function-Chat-7B-Q6_K.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q6_K.gguf) | Q6_K | 6.25GB | false | Very high quality, near perfect, *recommended*. |
| [Arch-Function-Chat-7B-Q5_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q5_K_L.gguf) | Q5_K_L | 5.78GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Arch-Function-Chat-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q5_K_M.gguf) | Q5_K_M | 5.44GB | false | High quality, *recommended*. |
| [Arch-Function-Chat-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q5_K_S.gguf) | Q5_K_S | 5.32GB | false | High quality, *recommended*. |
| [Arch-Function-Chat-7B-Q4_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q4_K_L.gguf) | Q4_K_L | 5.09GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Arch-Function-Chat-7B-Q4_1.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q4_1.gguf) | Q4_1 | 4.87GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Arch-Function-Chat-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for most use cases, *recommended*. |
| [Arch-Function-Chat-7B-Q3_K_XL.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q3_K_XL.gguf) | Q3_K_XL | 4.57GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Arch-Function-Chat-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q4_K_S.gguf) | Q4_K_S | 4.46GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Arch-Function-Chat-7B-Q4_0.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q4_0.gguf) | Q4_0 | 4.44GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Arch-Function-Chat-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-IQ4_NL.gguf) | IQ4_NL | 4.44GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Arch-Function-Chat-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-IQ4_XS.gguf) | IQ4_XS | 4.22GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Arch-Function-Chat-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q3_K_L.gguf) | Q3_K_L | 4.09GB | false | Lower quality but usable, good for low RAM availability. |
| [Arch-Function-Chat-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. |
| [Arch-Function-Chat-7B-IQ3_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-IQ3_M.gguf) | IQ3_M | 3.57GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Arch-Function-Chat-7B-Q2_K_L.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q2_K_L.gguf) | Q2_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Arch-Function-Chat-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q3_K_S.gguf) | Q3_K_S | 3.49GB | false | Low quality, not recommended. |
| [Arch-Function-Chat-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-IQ3_XS.gguf) | IQ3_XS | 3.35GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Arch-Function-Chat-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-IQ3_XXS.gguf) | IQ3_XXS | 3.11GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Arch-Function-Chat-7B-Q2_K.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-Q2_K.gguf) | Q2_K | 3.02GB | false | Very low quality but surprisingly usable. |
| [Arch-Function-Chat-7B-IQ2_M.gguf](https://huggingface.co/bartowski/katanemo_Arch-Function-Chat-7B-GGUF/blob/main/katanemo_Arch-Function-Chat-7B-IQ2_M.gguf) | IQ2_M | 2.78GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method but with the embedding and output weights quantized to Q8_0 instead of their usual default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/katanemo_Arch-Function-Chat-7B-GGUF --include "katanemo_Arch-Function-Chat-7B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/katanemo_Arch-Function-Chat-7B-GGUF --include "katanemo_Arch-Function-Chat-7B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (katanemo_Arch-Function-Chat-7B-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do so automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM (though only the 4_4 variant for now). Loading may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Manavshah/bert-tiny-reward-model | Manavshah | 2025-04-01T23:16:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:prajjwal1/bert-tiny",
"base_model:finetune:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-01T23:16:01Z | ---
library_name: transformers
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_trainer
model-index:
- name: bert-tiny-reward-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-reward-model
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
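Reward models trained with a sequence-classification head are typically scored as follows — a sketch under that assumption, with a hypothetical example text:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Manavshah/bert-tiny-reward-model")
model = AutoModelForSequenceClassification.from_pretrained("Manavshah/bert-tiny-reward-model")

inputs = tokenizer("The assistant's answer was clear and helpful.", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits.squeeze()
print(reward)  # higher logit = higher estimated reward, by the usual convention
```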
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 400
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
mradermacher/NanoLM-0.3B-Instruct-v1-GGUF | mradermacher | 2025-04-01T23:15:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"zh",
"base_model:Mxode/NanoLM-0.3B-Instruct-v1",
"base_model:quantized:Mxode/NanoLM-0.3B-Instruct-v1",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-01T23:11:43Z | ---
base_model: Mxode/NanoLM-0.3B-Instruct-v1
language:
- en
- zh
library_name: transformers
license: gpl-3.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Mxode/NanoLM-0.3B-Instruct-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.f16.gguf) | f16 | 0.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF | mradermacher | 2025-04-01T23:15:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:wang7776/vicuna-7b-v1.3-attention-sparsity-10",
"base_model:quantized:wang7776/vicuna-7b-v1.3-attention-sparsity-10",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T22:35:13Z | ---
base_model: wang7776/vicuna-7b-v1.3-attention-sparsity-10
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wang7776/vicuna-7b-v1.3-attention-sparsity-10
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.3-attention-sparsity-10-GGUF/resolve/main/vicuna-7b-v1.3-attention-sparsity-10.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Bielkik78bv0.1-Home-llm-GGUF | mradermacher | 2025-04-01T23:13:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:gacekk/Bielkik78bv0.1-Home-llm",
"base_model:quantized:gacekk/Bielkik78bv0.1-Home-llm",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T22:50:21Z | ---
base_model: gacekk/Bielkik78bv0.1-Home-llm
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/gacekk/Bielkik78bv0.1-Home-llm
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Bielkik78bv0.1-Home-llm-GGUF/resolve/main/Bielkik78bv0.1-Home-llm.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
matheusvolpato/matheusvolpato | matheusvolpato | 2025-04-01T23:12:08Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-04-01T22:25:37Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
samoline/dd97b852-10d1-486f-9829-257bdfc50b87 | samoline | 2025-04-01T23:11:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-01T22:35:34Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dd97b852-10d1-486f-9829-257bdfc50b87
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5c0fefe1b3e478e0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5c0fefe1b3e478e0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/dd97b852-10d1-486f-9829-257bdfc50b87
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/5c0fefe1b3e478e0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: ea620405-2d7e-4869-b996-f0c83b806674
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: ea620405-2d7e-4869-b996-f0c83b806674
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dd97b852-10d1-486f-9829-257bdfc50b87
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
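As a LoRA adapter, this checkpoint is applied on top of the base model with PEFT. A minimal sketch (untested; note that the training results below report NaN losses, so treat the adapter with caution):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("NousResearch/Hermes-2-Pro-Mistral-7B")
model = PeftModel.from_pretrained(base, "samoline/dd97b852-10d1-486f-9829-257bdfc50b87")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Mistral-7B")
```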
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0000 | 2 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
danie94-lml/sdxl-lora-cracks | danie94-lml | 2025-04-01T23:10:21Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2025-04-01T18:43:05Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - danie94-lml/sdxl-lora-cracks
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the danie94-lml/crack_dataset dataset. You can find some example images below.




LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
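Until the snippet above is filled in, a minimal sketch along the following lines should work for a standard SDXL LoRA; the repo id, base model, and fp16 VAE are taken from this card, while the prompt is purely illustrative.
```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
# fp16-safe VAE used during training (see above)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
# attach the LoRA weights from this repository
pipe.load_lora_weights("danie94-lml/sdxl-lora-cracks")
image = pipe("a concrete wall with a long crack").images[0]  # illustrative prompt
image.save("crack.png")
```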
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
MinaMila/llama_instbase_Adult_12ep_42 | MinaMila | 2025-04-01T23:10:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:06:59Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
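Since the repo appears to hold full model weights rather than an adapter, a plain Transformers pipeline call is one hedged way to try it:
```python
from transformers import pipeline
# Minimal sketch (assumption): the repo contains a complete causal LM checkpoint.
generator = pipeline("text-generation", model="MinaMila/llama_instbase_Adult_12ep_42", device_map="auto")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```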
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
t3resa/detr-resnet-50-sku110k-400-queries-v1 | t3resa | 2025-04-01T23:05:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| object-detection | 2025-04-01T22:09:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
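The card leaves this section empty; as a hedged starting point for a DETR-style detector hosted on the Hub, something like the following should work (the image path and threshold are placeholders):
```python
from transformers import pipeline
# Minimal sketch (assumption): a standard DETR checkpoint usable via the object-detection pipeline.
detector = pipeline("object-detection", model="t3resa/detr-resnet-50-sku110k-400-queries-v1")
results = detector("shelf.jpg", threshold=0.5)  # path or URL to an image, e.g. a retail shelf
for det in results:
    print(det["label"], round(det["score"], 3), det["box"])
```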
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
epoch=112;mAP=0.567
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
t3resa/detr-resnet-50-sku110k-400-queries-v0 | t3resa | 2025-04-01T23:04:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| object-detection | 2025-04-01T16:57:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
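As with the v1 card, no snippet is provided; a lower-level sketch using the DETR classes (assuming the checkpoint keeps the standard DETR architecture) might look like this:
```python
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection
# Minimal sketch (assumption): standard DETR head, loadable with the Detr* classes.
repo_id = "t3resa/detr-resnet-50-sku110k-400-queries-v0"
processor = DetrImageProcessor.from_pretrained(repo_id)
model = DetrForObjectDetection.from_pretrained(repo_id)
image = Image.open("shelf.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# convert raw logits and boxes into (score, label, box) detections
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]
print(len(detections["boxes"]), "boxes above threshold")
```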
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
epoch=90;mAP=0.566
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Heisenger/babyai-ppo-experiments-2025-04-01_22-57-07 | Heisenger | 2025-04-01T22:58:35Z | 0 | 0 | peft | [
"peft",
"pytorch",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
]
| null | 2025-04-01T22:57:11Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf | RichardErkhov | 2025-04-01T22:55:26Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-01T21:54:20Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi3_intent_v46_2_w_unknown_7_lr_0.002 - GGUF
- Model creator: https://huggingface.co/magnifi/
- Original model: https://huggingface.co/magnifi/Phi3_intent_v46_2_w_unknown_7_lr_0.002/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q2_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q2_K.gguf) | Q2_K | 1.35GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ3_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ3_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q3_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q3_K.gguf) | Q3_K | 1.75GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_K.gguf) | Q4_K | 2.16GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_K.gguf) | Q5_K | 2.53GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q6_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q8_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf/blob/main/Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q8_0.gguf) | Q8_0 | 3.78GB |
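One common way to fetch a single quant from a multi-file repo like this is `huggingface_hub`; the filename below is one of the entries from the table above.
```python
from huggingface_hub import hf_hub_download
# download one quant from this repo; the other filenames in the table work the same way
path = hf_hub_download(
    repo_id="RichardErkhov/magnifi_-_Phi3_intent_v46_2_w_unknown_7_lr_0.002-gguf",
    filename="Phi3_intent_v46_2_w_unknown_7_lr_0.002.Q4_K_M.gguf",
)
print(path)  # local cache path, ready to pass to a GGUF runtime such as llama.cpp
```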
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Manal0809/Mistral_nemo_calibrated_f1enhanced_full_oldinstruct_best | Manal0809 | 2025-04-01T22:55:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"region:us"
]
| null | 2025-04-01T22:55:04Z | ---
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
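No snippet is given; since the base model is a 4-bit Unsloth build, one hedged option is to load the adapter with PEFT (this is an assumption about the intended usage, not a documented recipe):
```python
# Minimal sketch (assumption): LoRA adapter loadable with PEFT on top of the 4-bit base.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
    "Manal0809/Mistral_nemo_calibrated_f1enhanced_full_oldinstruct_best",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit")
inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```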
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF | mradermacher | 2025-04-01T22:54:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"finetuned",
"en",
"base_model:notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30",
"base_model:quantized:notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-01T22:22:18Z | ---
base_model: notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- finetuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
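For a quick local test, the `llama-cpp-python` bindings are one option; the sketch below assumes you have downloaded one of the single-file quants listed in the table further down.
```python
from llama_cpp import Llama
# load a single-file quant from this repo (multi-part files must be concatenated first)
llm = Llama(model_path="Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```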
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bowilleatyou/da30fee6-eb14-42bd-ba3b-e266cd643b60 | bowilleatyou | 2025-04-01T22:51:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T16:16:55Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rdruce/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-frisky_lanky_hippo | rdruce | 2025-04-01T22:49:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am frisky lanky hippo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T22:26:14Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-frisky_lanky_hippo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am frisky lanky hippo
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-frisky_lanky_hippo
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rdruce/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-frisky_lanky_hippo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vivek1192/merged_medicalQAsce-hindi | vivek1192 | 2025-04-01T22:47:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2408.07990",
"base_model:BioMistral/BioMistral-7B",
"base_model:merge:BioMistral/BioMistral-7B",
"base_model:OdiaGenAI/mistral_hindi_7b_base_v1",
"base_model:merge:OdiaGenAI/mistral_hindi_7b_base_v1",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:merge:mistralai/Mistral-7B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T22:44:36Z | ---
base_model:
- mistralai/Mistral-7B-v0.1
- BioMistral/BioMistral-7B
- OdiaGenAI/mistral_hindi_7b_base_v1
library_name: transformers
tags:
- mergekit
- merge
---
# merged_models
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
* [OdiaGenAI/mistral_hindi_7b_base_v1](https://huggingface.co/OdiaGenAI/mistral_hindi_7b_base_v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: sce
modules:
default:
slices:
- sources:
- layer_range: [0, 32]
model: BioMistral/BioMistral-7B
parameters:
density: 0.5
weight: 0.5
- layer_range: [0, 32]
model: OdiaGenAI/mistral_hindi_7b_base_v1
parameters:
density: 0.5
weight: 0.5
- layer_range: [0, 32]
model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: 1.0
normalize: 0.0
```
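To reproduce the merge, a config like this is normally passed to the mergekit CLI; the exact command below is an assumption about the authors' setup rather than something stated in the card.
```
mergekit-yaml config.yaml ./merged_models --cuda
```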
|
svgeek/QwQ-32B | svgeek | 2025-04-01T22:47:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2412.15115",
"base_model:Qwen/Qwen2.5-32B",
"base_model:finetune:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T22:06:22Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B
tags:
- chat
library_name: transformers
---
# svgeek/QwQ-32B
Qwen/QwQ-32B with a `config.json` update to support long context.
# QwQ-32B
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.
<p align="center">
<img width="100%" src="figures/benchmark.jpg">
</p>
**This repo contains the QwQ 32B model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- For prompts exceeding 8,192 tokens in length, you must enable YaRN as outlined in [this section](#usage-guidelines).
**Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.
You can try our [demo](https://huggingface.co/spaces/Qwen/QwQ-32B-Demo) or access QwQ models via [QwenChat](https://chat.qwen.ai).
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
QwQ is based on Qwen2.5, whose code is included in the latest Hugging Face `transformers`. We advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet shows how to load the tokenizer and model with `apply_chat_template`, and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r's are in the word \"strawberry\""
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Usage Guidelines
To achieve optimal performance, we recommend the following settings:
1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already automatically implemented, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
2. **Sampling Parameters**:
- Use Temperature=0.6, TopP=0.95, MinP=0 instead of Greedy decoding to avoid endless repetitions.
- Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may result in occasional language mixing and a slight decrease in performance.
3. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This feature is already implemented in `apply_chat_template`.
4. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`."
5. **Handle Long Inputs**: For inputs exceeding 8,192 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM; a hedged sketch combining vLLM with the sampling settings above appears at the end of this section.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
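Putting the recommended sampling settings together with the vLLM Python API, a hedged deployment sketch might look like the following; the model id comes from this card, while the prompt and `max_tokens` value are illustrative.
```python
from vllm import LLM, SamplingParams
# sampling settings recommended in the usage guidelines above
sampling = SamplingParams(temperature=0.6, top_p=0.95, top_k=30, min_p=0.0, max_tokens=4096)
# add the rope_scaling entry to config.json first if you need contexts beyond 32,768 tokens
llm = LLM(model="Qwen/QwQ-32B")
outputs = llm.chat([{"role": "user", "content": "How many r's are in \"strawberry\"?"}], sampling_params=sampling)
print(outputs[0].outputs[0].text)
```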
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwq-32b/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwq32b,
title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
url = {https://qwenlm.github.io/blog/qwq-32b/},
author = {Qwen Team},
month = {March},
year = {2025}
}
@article{qwen2.5,
title={Qwen2.5 Technical Report},
author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
journal={arXiv preprint arXiv:2412.15115},
year={2024}
}
```
|