| Column | Type | Values / Range |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-14 06:27:53 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 519 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-14 06:27:45 |
| card | string | lengths 11 – 1.01M |
rrreol69/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_shrewd_gibbon | rrreol69 | 2025-04-02T07:39:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am thick shrewd gibbon",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T07:38:45Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_shrewd_gibbon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am thick shrewd gibbon
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_shrewd_gibbon
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rrreol69/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thick_shrewd_gibbon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
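For readers who want to reproduce a similar run, TRL exposes GRPO through `GRPOTrainer`. The snippet below is a minimal sketch, not the exact swarm training script; the dataset and the `reward_len` reward function are illustrative placeholders:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative dataset; the actual rl-swarm setup differs.
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward function: prefer completions close to 50 characters.
def reward_len(completions, **kwargs):
    return [-abs(50 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO")
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```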
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
KingEmpire/sn9_pre_c04_12 | KingEmpire | 2025-04-02T07:37:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T03:06:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Brianpuze/Qwen2.5-0.5B-Q4_K_M-Q3_K_M-GGUF-TEST | Brianpuze | 2025-04-02T07:37:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-04-02T07:36:36Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: Qwen/Qwen2.5-0.5B
---
# Brianpuze/Qwen2.5-0.5B-Q4_K_M-Q3_K_M-GGUF-TEST
This repo contains GGUF quantized versions of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B), produced with llama.cpp.
## Quantized Versions:
- qwen2.5-0.5b-q4_k_m.gguf
- qwen2.5-0.5b-q3_k_m.gguf
## Run with llama.cpp
```bash
llama-cli --hf-repo Brianpuze/Qwen2.5-0.5B-Q4_K_M-Q3_K_M-GGUF-TEST --hf-file qwen2.5-0.5b-q4_k_m.gguf -p "The meaning of life is"
```
(Replace the filename to use the other variant.)
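For programmatic use, the same GGUF files can be loaded with the `llama-cpp-python` bindings. This is a minimal sketch (assuming `pip install llama-cpp-python huggingface_hub`); the sampling parameters are illustrative:
```python
from llama_cpp import Llama

# Download and load the Q4_K_M variant from this repo
# (Llama.from_pretrained fetches the file via huggingface_hub).
llm = Llama.from_pretrained(
    repo_id="Brianpuze/Qwen2.5-0.5B-Q4_K_M-Q3_K_M-GGUF-TEST",
    filename="qwen2.5-0.5b-q4_k_m.gguf",
)

output = llm("The meaning of life is", max_tokens=64)
print(output["choices"][0]["text"])
```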
|
psyonp/Final-Llama-Math-1L | psyonp | 2025-04-02T07:35:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T07:32:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
inrainbws/resnet_r16_mlora | inrainbws | 2025-04-02T07:33:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T07:32:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
syyhmg/unsloth_finetune | syyhmg | 2025-04-02T07:29:50Z | 0 | 0 | transformers | [
"transformers",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-04-02T07:29:34Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** syyhmg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
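To load the fine-tune for inference, a sketch along these lines should work (assuming a recent `unsloth` release that provides `FastVisionModel`; the parameters are assumptions, not part of this card):
```python
from unsloth import FastVisionModel

# Assumed API: FastVisionModel handles mllama-style vision models.
model, tokenizer = FastVisionModel.from_pretrained(
    "syyhmg/unsloth_finetune",
    load_in_4bit=True,  # matches the 4-bit base model
)
FastVisionModel.for_inference(model)  # switch to inference mode
```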
|
John6666/nova-anime-xl-il-v60-sdxl | John6666 | 2025-04-02T07:28:43Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"hentai",
"2D",
"2.5D",
"illustration",
"fantasy",
"landscape",
"colorful",
"digital art",
"posing",
"merge",
"noobai",
"Illustrious XL v1.1",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v1.1",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v1.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-02T07:18:50Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- 2D
- 2.5D
- illustration
- fantasy
- landscape
- colorful
- digital art
- posing
- merge
- noobai
- Illustrious XL v1.1
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v1.1
- Laxhar/noobai-XL-1.1
---
The original model is available [here](https://civitai.com/models/376130/nova-anime-xl?modelVersionId=1610231).
It was created by [Crody](https://civitai.com/user/Crody).
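Since the repo is tagged `diffusers:StableDiffusionXLPipeline`, it can presumably be loaded like any SDXL checkpoint; this is a hedged sketch, and the prompt is illustrative:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint in half precision on GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/nova-anime-xl-il-v60-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, fantasy landscape, colorful, illustration").images[0]
image.save("nova_anime_xl_sample.png")
```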
|
mergekit-community/MN-Hekate-Daidalos-17B | mergekit-community | 2025-04-02T07:28:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:DavidAU/MN-Dark-Planet-TITAN-12B",
"base_model:merge:DavidAU/MN-Dark-Planet-TITAN-12B",
"base_model:EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math",
"base_model:merge:EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math",
"base_model:FallenMerick/MN-Violet-Lotus-12B",
"base_model:merge:FallenMerick/MN-Violet-Lotus-12B",
"base_model:jtatman/mistral_nemo_12b_reasoning_psychology_lora",
"base_model:merge:jtatman/mistral_nemo_12b_reasoning_psychology_lora",
"base_model:mergekit-community/MN-Hekate-Anassa-17B",
"base_model:merge:mergekit-community/MN-Hekate-Anassa-17B",
"base_model:mergekit-community/MN-Hekate-Panopaia-12B",
"base_model:merge:mergekit-community/MN-Hekate-Panopaia-12B",
"base_model:mergekit-community/MN-Sappho-j-12B",
"base_model:merge:mergekit-community/MN-Sappho-j-12B",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:merge:mistralai/Mistral-Nemo-Base-2407",
"base_model:nbeerbower/mistral-nemo-wissenschaft-12B",
"base_model:merge:nbeerbower/mistral-nemo-wissenschaft-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T07:14:09Z | ---
base_model:
- FallenMerick/MN-Violet-Lotus-12B
- mistralai/Mistral-Nemo-Base-2407
- nbeerbower/mistral-nemo-wissenschaft-12B
- EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math
- mergekit-community/MN-Hekate-Panopaia-12B
- DavidAU/MN-Dark-Planet-TITAN-12B
- mergekit-community/MN-Sappho-j-12B
- jtatman/mistral_nemo_12b_reasoning_psychology_lora
- mergekit-community/MN-Hekate-Anassa-17B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [mergekit-community/MN-Hekate-Anassa-17B](https://huggingface.co/mergekit-community/MN-Hekate-Anassa-17B) as the base.
### Models Merged
The following models were included in the merge:
* [FallenMerick/MN-Violet-Lotus-12B](https://huggingface.co/FallenMerick/MN-Violet-Lotus-12B)
* [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)
* [nbeerbower/mistral-nemo-wissenschaft-12B](https://huggingface.co/nbeerbower/mistral-nemo-wissenschaft-12B)
* [EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math](https://huggingface.co/EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math)
* [mergekit-community/MN-Hekate-Panopaia-12B](https://huggingface.co/mergekit-community/MN-Hekate-Panopaia-12B)
* [DavidAU/MN-Dark-Planet-TITAN-12B](https://huggingface.co/DavidAU/MN-Dark-Planet-TITAN-12B)
* [mergekit-community/MN-Sappho-j-12B](https://huggingface.co/mergekit-community/MN-Sappho-j-12B) + [jtatman/mistral_nemo_12b_reasoning_psychology_lora](https://huggingface.co/jtatman/mistral_nemo_12b_reasoning_psychology_lora)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float32
out_dtype: bfloat16
merge_method: model_stock
base_model: mergekit-community/MN-Hekate-Anassa-17B
slices:
- sources:
- model: mergekit-community/MN-Hekate-Anassa-17B
layer_range: [0, 20]
- sources:
- model: mergekit-community/MN-Hekate-Anassa-17B
layer_range: [20, 36]
- model: mergekit-community/MN-Hekate-Panopaia-12B
layer_range: [16, 32]
- model: mistralai/Mistral-Nemo-Base-2407
layer_range: [16, 32]
parameters:
weight: [2, 1]
- model: FallenMerick/MN-Violet-Lotus-12B
layer_range: [16, 32]
parameters:
weight: [1.5, 1.49, 1.46, 1.4, 1.33, 1.25, 1.15, 1.05, 1]
- model: nbeerbower/mistral-nemo-wissenschaft-12B
layer_range: [16, 32]
parameters:
weight: [1.5, 1.49, 1.46, 1.4, 1.33, 1.25, 1.15, 1.05, 1]
- model: EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math
layer_range: [16, 32]
parameters:
weight: [1.5, 1.49, 1.46, 1.4, 1.33, 1.25, 1.15, 1.05, 1]
- model: mergekit-community/MN-Sappho-j-12B+jtatman/mistral_nemo_12b_reasoning_psychology_lora
layer_range: [16, 32]
parameters:
weight: [1.0, 1.1, 1.2, 1.29, 1.37, 1.43, 1.48, 1.5, 1.5]
- model: DavidAU/MN-Dark-Planet-TITAN-12B
layer_range: [16, 32]
parameters:
weight: [1.0, 1.1, 1.2, 1.29, 1.37, 1.43, 1.48, 1.5, 1.5]
- sources:
- model: mergekit-community/MN-Hekate-Anassa-17B
layer_range: [36, 56]
tokenizer:
source: union
tokens:
"[INST]":
source: mergekit-community/MN-Hekate-Anassa-17B
force: true
"[/INST]":
source: mergekit-community/MN-Hekate-Anassa-17B
force: true
"<|im_start|>":
source: mergekit-community/MN-Hekate-Anassa-17B
force: true
"<|im_end|>":
source: mergekit-community/MN-Hekate-Anassa-17B
force: true
```
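To apply a configuration like the one above, mergekit's command-line entry point can be used. A typical invocation (assuming mergekit is installed and the YAML is saved as `config.yaml`) looks like:
```bash
pip install mergekit
# Produce the merged model in ./MN-Hekate-Daidalos-17B
mergekit-yaml config.yaml ./MN-Hekate-Daidalos-17B --cuda
```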
|
dwentzart/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_fierce_cow | dwentzart | 2025-04-02T07:28:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scented fierce cow",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T06:03:24Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_fierce_cow
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scented fierce cow
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_fierce_cow
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dwentzart/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_fierce_cow", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/llama_instbase_unlearned_Adult_13ep_22 | MinaMila | 2025-04-02T07:27:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T07:24:05Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PrunaAI/agentica-org-DeepScaleR-1.5B-Preview-bnb-8bit-smashed | PrunaAI | 2025-04-02T07:25:56Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"pruna-ai",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-03-26T03:06:51Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warm-up. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g., other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo ORIGINAL_REPO_NAME are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/agentica-org-DeepScaleR-1.5B-Preview-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
wandererupak/wave2vec-bert-oslrULTIMATECOLAB-TAKE-4 | wandererupak | 2025-04-02T07:25:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-04-02T07:03:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kk-aivio/0860c4d4-0dc2-4cec-9d3e-ef121780fd82 | kk-aivio | 2025-04-02T07:24:27Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3",
"region:us"
]
| null | 2025-04-02T07:23:18Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/mistral-7b-instruct-v0.3
model-index:
- name: kk-aivio/0860c4d4-0dc2-4cec-9d3e-ef121780fd82
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/0860c4d4-0dc2-4cec-9d3e-ef121780fd82
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5557
## Model description
More information needed
## Intended uses & limitations
More information needed
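Since this repo holds a PEFT adapter for `unsloth/mistral-7b-instruct-v0.3`, loading it would typically look like the sketch below (a hedged example; the device settings are assumptions, not part of this card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-instruct-v0.3", device_map="auto"
)
model = PeftModel.from_pretrained(base, "kk-aivio/0860c4d4-0dc2-4cec-9d3e-ef121780fd82")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.3")
```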
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prithivMLmods/Pocket-Llama-3.2-3B-Instruct | prithivMLmods | 2025-04-02T07:24:09Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama3.2",
"Pocket-Llama",
"text-generation-inference",
"Code",
"Math",
"conversational",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T06:38:11Z | ---
library_name: transformers
tags:
- Llama3.2
- Pocket-Llama
- text-generation-inference
- Code
- Math
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
---

# **Pocket-Llama-3.2-3B-Instruct**
> Pocket-Llama-3.2-3B-Instruct is based on the Llama 3.2 architecture and is designed as a lightweight, efficient general-purpose chat assistant. It is optimized for fast inference while maintaining strong problem-solving, mathematical-reasoning, and scientific capabilities, and is fine-tuned for structured reasoning, minimal token wastage, and high-quality technical responses.
## **Key Improvements**
1. **Optimized for General Purpose Chat**: Excels in a wide range of topics, including casual conversation, technical discussions, and knowledge-based queries.
2. **Strong Math & Science Capabilities**: Provides accurate and structured explanations for mathematical and scientific problems.
3. **Compact yet Powerful**: Maintains strong problem-solving capabilities within a smaller 3B parameter architecture, ensuring accessibility on resource-limited devices.
4. **Advanced Reasoning Capabilities**: Excels in algorithmic problem-solving, structured technical explanations, and logical analysis.
5. **Efficient Memory Utilization**: Reduces computational overhead while maintaining high-quality outputs.
6. **Focused Output Generation**: Avoids unnecessary token generation, ensuring concise and relevant responses.
## **Quickstart with transformers**
Here is a code snippet to load the tokenizer and model using `apply_chat_template` for structured input formatting:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Pocket-Llama-3.2-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Explain the theory of relativity in simple terms."
messages = [
{"role": "system", "content": "You are an advanced assistant specialized in science and mathematics."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=6090
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## **Intended Use**
1. **General Chat & Knowledge-Based Queries**:
Engages in informative and casual discussions on a wide range of topics.
2. **Mathematics & Science Problem Solving**:
Provides accurate calculations and structured explanations for complex problems.
3. **Technical Documentation & Explanation**:
Assists in generating well-structured documentation for APIs, scientific concepts, and coding principles.
4. **Debugging Assistance**:
Helps identify and correct errors in code snippets.
5. **Educational Support**:
Simplifies complex topics for students and learners with clear explanations.
6. **Structured Data Processing**:
Generates structured outputs like JSON, XML, and tables for data science applications.
## **Limitations**
1. **Hardware Constraints**:
Although lighter than larger models, still requires a moderately powerful GPU or TPU for optimal performance.
2. **Potential Bias in Responses**:
Outputs may reflect biases present in training data.
3. **Limited Creativity**:
May generate variable results in non-technical, creative tasks.
4. **No Real-Time Awareness**:
Lacks access to real-world events beyond its training cutoff.
5. **Error Propagation in Long Responses**:
Minor mistakes in early outputs may affect overall coherence in lengthy responses.
6. **Prompt Sensitivity**:
The effectiveness of responses depends on well-structured prompts. |
xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_4_def_lora | xw17 | 2025-04-02T07:23:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T03:07:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pritmanvar/Llama-2-7b-chat-hf-fine-tuned | pritmanvar | 2025-04-02T07:23:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T06:31:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cogmhear/avse4_baseline | cogmhear | 2025-04-02T07:22:54Z | 0 | 0 | null | [
"en",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2025-04-02T07:14:16Z | ---
license: cc-by-nc-4.0
language:
- en
---
## Baseline model for 4th COG-MHEAR Audio-Visual Speech Enhancement Challenge
[Challenge link](https://challenge.cogmhear.org/)
## Requirements
* [Python >= 3.6](https://www.anaconda.com/docs/getting-started/miniconda/install)
* [PyTorch](https://pytorch.org/)
* [PyTorch Lightning](https://lightning.ai/docs/pytorch/latest/)
* [Decord](https://github.com/dmlc/decord)
* [Hydra](https://hydra.cc)
* [SpeechBrain](https://github.com/speechbrain/speechbrain)
* [TQDM](https://github.com/tqdm/tqdm)
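A minimal install sketch (package names are assumptions inferred from the list above; follow each project's own instructions, especially for CUDA-specific PyTorch builds):

```bash
pip install torch pytorch-lightning decord hydra-core speechbrain tqdm
```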
## Usage
```bash
# Expected folder structure for the dataset
data_root
|-- train
| `-- scenes
|-- dev
| `-- scenes
|-- eval
| `-- scenes
```
### Clone the repo
```bash
git clone https://github.com/cogmhear/avse_challenge
cd avse_challenge/baseline/avse4
```
### Train
```bash
python train.py data.root="./avsec4" data.num_channels=2 trainer.log_dir="./logs" data.batch_size=8 trainer.accelerator=gpu trainer.gpus=1
# More arguments are available in conf/train.yaml
```
### Test
```bash
python test.py data.root=./avsec4 data.num_channels=2 ckpt_path=pretrained.ckpt save_dir="./eval" model_uid="./avse4"
# More arguments are available in conf/eval.yaml
```
|
bumie-e/gameplay-dqn-SpaceInvadersNoFrameskip-v4 | bumie-e | 2025-04-02T07:21:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-02T07:20:54Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 162.00 +/- 21.00
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bumie-e -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bumie-e -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bumie-e
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mlfoundations-dev/30k_globalbatchsize96_lr1e5_epochs5 | mlfoundations-dev | 2025-04-02T07:19:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T17:41:20Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 30k_globalbatchsize96_lr1e5_epochs5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 30k_globalbatchsize96_lr1e5_epochs5
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openthoughts_30000 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
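For reference, the run above roughly maps onto a LLaMA-Factory SFT config such as the following. This is a sketch, not the exact file used for this model; the key names follow LLaMA-Factory's published examples, the values mirror the hyperparameters listed above, and the registered dataset name is an assumption:

```yaml
model_name_or_path: Qwen/Qwen2.5-7B-Instruct
stage: sft
do_train: true
finetuning_type: full
dataset: openthoughts_30000   # registered name is an assumption
learning_rate: 1.0e-5
per_device_train_batch_size: 1
gradient_accumulation_steps: 3
num_train_epochs: 5.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
```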
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
John6666/hana4chrome-v20-sdxl | John6666 | 2025-04-02T07:18:29Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"waifu",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-02T07:10:21Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- waifu
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1422278?modelVersionId=1610153).
This model was created by [CHROMEKIDD](https://civitai.com/user/CHROMEKIDD).
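A minimal 🧨 diffusers loading sketch (the prompt, step count, and dtype are assumptions, not the author's recommended settings; Illustrious-based checkpoints generally expect danbooru-style tag prompts):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL checkpoint in half precision on GPU
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/hana4chrome-v20-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, masterpiece, best quality", num_inference_steps=28).images[0]
image.save("sample.png")
```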
|
gabibguti/flower-prediction-model-hf | gabibguti | 2025-04-02T07:17:49Z | 0 | 0 | safe-ai-models | [
"safe-ai-models",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-04-02T07:17:44Z | ---
library_name: safe-ai-models
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: https://github.com/gabibguti/safe-ai-models
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
erax-ai/EraX-VL-7B-V1.5 | erax-ai | 2025-04-02T07:16:55Z | 367 | 6 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"erax",
"multimodal",
"erax-vl-7B",
"insurance",
"ocr",
"vietnamese",
"bcg",
"image-to-text",
"visual-question-answering",
"vi",
"en",
"zh",
"arxiv:2308.12966",
"arxiv:2407.10671",
"arxiv:2404.16821",
"arxiv:2404.07922",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"doi:10.57967/hf/3934",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| visual-question-answering | 2024-11-26T00:51:41Z | ---
license: apache-2.0
language:
- vi
- en
- zh
base_model:
- Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
tags:
- erax
- multimodal
- erax-vl-7B
- insurance
- ocr
- vietnamese
- bcg
- image-to-text
pipeline_tag: visual-question-answering
widget:
- src: images/photo-1-16505057982762025719470.webp
example_title: Test 1
- src: images/vt-don-thuoc-f0-7417.jpeg
example_title: Test 2
---
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/GsQKdaTyn2FFx_cZvVHk3.png" alt="Logo">
</p>
# EraX-VL-7B-V1.5
## Introduction 🎉
Hot on the heels of the popular **<a href="https://huggingface.co/erax-ai/EraX-VL-7B-V1.0" target="_blank">EraX-VL-7B-V1.0 model</a>**, we proudly present **EraX-VL-7B-V1.5**, another robust multimodal model for **OCR (optical character recognition)** and **VQA (visual question-answering)** that excels in various languages 🌍, with a particular focus on Vietnamese 🇻🇳. This model stands out for its precise recognition capabilities across a range of documents 📝, including medical forms 🩺, invoices 🧾, bills of sale 💳, quotes 📄, and medical records 💊. This functionality is expected to be highly beneficial for hospitals 🏥, clinics 💉, insurance companies 🛡️, and other similar applications 📋. Built on the solid foundation of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)[1], which we found to be of high quality and fluent in Vietnamese, `EraX-VL-7B-V1.5` has been fine-tuned to enhance its performance. We plan to continue improving and releasing new versions for free, along with sharing performance benchmarks in the near future.
One standout feature of **EraX-VL-7B-V1.5** is its capability to do multi-turn Q&A with impressive reasoning!
**NOTA BENE**:
- EraX-VL-7B-V1.5 is NOT a typical OCR-only tool like Tesseract but a multimodal LLM-based model. To use it effectively, you may have to **tailor your prompt carefully** depending on your task.
- This model was NOT fine-tuned on medical (X-ray) or car-accident datasets (yet). Stay tuned for an updated version coming in early 2025.
**EraX-VL-7B-V1.5** is a young member of our **EraX LànhGPT** collection of LLM models.
- **Developed by:**
- Nguyễn Anh Nguyên ([email protected])
- Nguyễn Hồ Nam (BCG)
- Phạm Huỳnh Nhật ([email protected])
- Phạm Đình Thục ([email protected])
- **Funded by:** [Bamboo Capital Group](https://bamboocap.com.vn) and EraX
- **Model type:** Multimodal Transformer with over 7B parameters
- **Languages (NLP):** Primarily Vietnamese with multilingual capabilities
- **License:** Apache 2.0
- **Fine-tuned from:** [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)
- **Prompt examples:** <a href="https://github.com/EraX-JS-Company/erax-vl-7b-v1/blob/main/prompts/Vietnam_popular_prompts.txt" target="_blank">Some popular prompt examples on Github.</a>
## Benchmarks 📊
## 🏆 LeaderBoard
The EraX-VL-7B-V1.5 achieved exceptionally high performance compared to other models of equal size or even **10 times larger**, and it is open-source! You can re-run the benchmark at any time.
<table style="width:75%;">
<tr>
<th align="middle" width="300">Models</th>
<td align="middle" width="150"><b>Open-Source</b></td>
<td align="middle" width="300"><b>VI-MTVQA</b></td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-7B-V1.5 🥇 </font></th>
<td align="middle">✅</td>
<td align="middle">47.2 </td>
</tr>
<tr>
<th align="middle">Qwen2-VL 72B 🥈 </th>
<td align="middle">✘</td>
<td align="middle">41.6 </td>
</tr>
<tr>
<th align="middle">ViGPT-VL 🥉 </th>
<td align="middle">✘</td>
<td align="middle">39.1 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-2B-V1.5</font></th>
<td align="middle"> ✅ </td>
<td align="middle">38.2 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-7B-V1 </font></th>
<td align="middle"> ✅ </td>
<td align="middle">37.6 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>Vintern-1B-V2</font></th>
<td align="middle"> ✅ </td>
<td align="middle">37.4 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>Qwen2-VL 7B </font></th>
<td align="middle"> ✅ </td>
<td align="middle">30.0 </td>
</tr>
<tr>
<th align="middle">Claude3 Opus</th>
<td align="middle">✘</td>
<td align="middle">29.1 </td>
</tr>
<tr>
<th align="middle">GPT-4o mini </th>
<td align="middle"> ✘ </td>
<td align="middle">29.1 </td>
</tr>
<tr>
<th align="middle">GPT-4V</th>
<td align="middle">✘</td>
<td align="middle">28.9 </td>
</tr>
<tr>
<th align="middle">Gemini Ultra</th>
<td align="middle">✘</td>
<td align="middle">28.6 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>InternVL2 76B</font></th>
<td align="middle"> ✅ </td>
<td align="middle">26.9 </td>
</tr>
<tr>
<th align="middle">QwenVL Max</th>
<td align="middle">✘</td>
<td align="middle">23.5 </td>
</tr>
<tr>
<th align="middle">Claude3 Sonnet</th>
<td align="middle">✘</td>
<td align="middle">20.8 </td>
</tr>
<tr>
<th align="middle">QwenVL Plus</th>
<td align="middle">✘</td>
<td align="middle">18.1 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>MiniCPM-V2.5</font></th>
<td align="middle">✅</td>
<td align="middle">15.3 </td>
</tr>
</table>
**The test code for evaluating models in the paper can be found in**: <b><a href="https://github.com/EraX-JS-Company/EraX-MTVQA-Benchmark" target="_blank">EraX-JS-Company/EraX-MTVQA-Benchmark</a></b>
## API trial 🎉
Please contact **[email protected]** for API access inquiry.
## Examples 🧩
### 1. OCR - Optical Character Recognition for Multi-Images
**Example 01: Citizen identification card**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="images/trinhquangduy_front.jpg" width="500" alt="Front View" />
<p>Front View</p>
</div>
<div style="text-align: center; margin: 0 10px;">
<img src="images/trinhquangduy_back.jpg" width="500" alt="Back View" />
<p>Back View</p>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://support.google.com/google-ads/thread/270967947/t%C3%B4i-%C4%91%C3%A3-g%E1%BB%ADi-h%C3%ACnh-%E1%BA%A3nh-c%C4%83n-c%C6%B0%E1%BB%9Bc-c%C3%B4ng-d%C3%A2n-c%E1%BB%A7a-ch%C3%ADnh-t%C3%B4i-%C4%91%E1%BB%83-x%C3%A1c-minh-danh-t%C3%ADnh?hl=vi" target="_blank">Google Support</a>
</p>
```
{
"Số thẻ": "037094012351",
"Họ và tên": "TRỊNH QUANG DUY",
"Ngày sinh": "04/09/1994",
"Giới tính": "Nam",
"Quốc tịch": "Việt Nam",
"Quê quán": "Tân Thành, Kim Sơn, Ninh Bình",
"Nơi thường trú": "Xóm 6\nTân Thành, Kim Sơn, Ninh Bình",
"Có giá trị đến": "04/09/2034",
"Đặc điểm nhân dạng": "sẹo chấm c. 1cm trên đuôi mắt trái",
"Nơi cấp": "CỤC TRƯỞNG CỤC CẢNH SÁT\nQUẢN LÝ HÀNH CHÍNH VỀ TRẬT TỰ XÃ HỘI",
"Ngày cấp": "10/12/2022",
"Cán bộ ký tên": "Nguyễn Quốc Hùng",
"Mã định danh": "IDVNM0940123513037094012351"
}
```
**Example 02: Driver's License**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="images/nguyenvandung_front.png" width="500" alt="Front View" />
<p>Front View</p>
</div>
<div style="text-align: center; margin: 0 10px;">
<img src="images/nguyenvandung_back.png" width="500" alt="Back View" />
<p>Back View</p>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://baophapluat.vn/khoi-to-tai-xe-len-mang-mua-giay-phep-lai-xe-gia-de-chay-xe-post481047.html" target="_blank">Báo Pháp luật</a>
</p>
```
{
"No.":"400116012313"
"Fullname":"NGUYỄN VĂN DŨNG"
"Date_of_birth":"08/06/1979"
"Nationality":"VIỆT NAM"
"Address":"X. Quỳnh Hầu, H. Quỳnh Lưu, T. Nghệ An
Nghệ An, ngày/date 23 tháng/month 04 năm/year 2022"
"Hang_Class":"FC"
"Expires":"23/04/2027"
"Place_of_issue":"Nghệ An"
"Date_of_issue":"ngày/date 23 tháng/month 04 năm/year 2022"
"Signer":"Trần Anh Tuấn"
"Các loại xe được phép":"Ô tô hạng C kéo rơmoóc, đầu kéo kéo sơmi rơmoóc và xe hạng B1, B2, C, FB2 (Motor vehicle of class C with a trailer, semi-trailer truck and vehicles of classes B1, B2, C, FB2)"
"Mã số":""
}
```
**Example 03: Vehicle Registration Certificate**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="images/nguyentonnhuan.jpg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://vietnamnet.vn/phan-biet-cac-loai-giay-dang-ky-xe-khi-mua-moto-da-qua-su-dung-541341.html" target="_blank">Báo Vietnamnet</a>
</p>
```
{
"Tên chủ xe": "NGUYỄN TÔN NHUẬN",
"Địa chỉ": "KE27 Kp3 P.TTTây Q7",
"Nhãn hiệu": "HONDA",
"Số loại": "DYLAN",
"Màu sơn": "Trắng",
"Năm sản xuất": "2012",
"Số máy": "F03E-0057735",
"Số khung": "SA04F-070410",
"Dung tích": "152",
"Số chỗ ngồi": "02",
"Biển số đăng ký": "59V1-498.89",
"Đăng ký lần đầu ngày": "08/06/2004",
"Chức vụ": "Thượng tá",
"Người ký": "Trần Văn Hiểu"
}
```
**Example 04: Vehicle Registration**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/w5WCaQ-k9nupRIQYddcpr.jpeg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://llumar.com.vn/dang-kiem-xe-o-to/" target="_blank">https://llumar.com.vn</a>
</p>
```
{
"vehicle": {
"registration_number": "30A-072.36",
"vehicle_inspection_number": "2903V-093515",
"type": "ô tô con",
"mark": "MERCEDES-BENZ",
"model_code": "C300 W204",
"engine_number": "27294732096079",
"chassis_number": "RLMGF5EX3DV005333",
"manufactured_year_and_country": "2013, Việt Nam",
"life_time_limit_to": "",
"commercial_use": "",
"modification": ""
},
"specifications": {
"wheel_formula": "4x2",
"wheel_tread": "1521/1512 (mm)",
"overall_dimension": "4650 x 1770 x 1429 (mm)",
"largest_luggage_container_dimension": "",
"wheelbase": "2760 (mm)",
"kerb_mass": "1575 (kg)",
"design_authorized_pay_load": "",
"design_authorized_total_mass": "2090/2090 (kg)",
"design_authorized_towed_mass": "",
"permissible_number_of_pers_carried": "5 chỗ ngồi, 0 chỗ đứng, 0 chỗ nằm",
"type_of_fuel_used": "Xăng",
"engine_displacement": "2996 (cm3)",
"max_output_per_rpm": "170(kW)/6000vph",
"number": "KC-1292285"
},
"inspection_report_number": "2905V-20953/16",
"valid_until": "31/01/2018",
"place_date_of_issue": "Hà Nội, ngày 1 tháng 8 năm 2016",
"inspection_center": "ĐƠN VỊ KIỂM ĐỊNH XE CƠ GIỚI",
"signature": "Ngọc Tuấn",
"equipped_with_tachograph": "",
"inspection_stamp_was_not_issued": "",
"notes": "Biển đăng ký nền trắng"
}
```
**Example 05: Receipt**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/40vIbNdM1cFXwQYNHx7Ag.jpeg" width="500"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://tintucketoan.com/cach-viet-hoa-don-hang-hoa-dich-vu-khong-chiu-thue-gtgt/" target="_blank">https://tintucketoan.com/</a>
</p>
```
{
'Mẫu số': '01GKTKT3/001',
'Ký hiệu': 'TC/18P',
'Số': '0000030',
'Họ tên người mua hàng': None,
'Tên đơn vị': 'Công Ty TNHH Kế Toán Hà Nội',
'Mã số thuế': '0106235869',
'Địa chỉ': 'Số 49 Ngõ 322 Lê Trọng Tấn, phường Khương Mai, quận Thanh Xuân, Hà Nội',
'Hình thức thanh toán': 'TM',
'STT': None,
'Tên hàng hóa, dịch vụ': 'Tra cứu phần mềm thư viện pháp luật trực tuyến',
'Đơn vị tính': None,
'Số lượng': None,
'Đơn giá': '168.000',
'Thành tiền': '2.016.000',
'Thuế suất GTGT': None,
'Tiền thuế GTGT': None,
'Tổng cộng tiền thanh toán': '2.016.000',
'Số tiền viết bằng chữ': 'Hai triệu, không trăm mười sáu nghìn đồng',
'Người bán hàng': 'Bùi Văn Hùng',
'Chức vụ người bán hàng': 'TRƯỞNG CHI NHÁNH'
}
```
### 2.1 Image Captioning
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/g5V60A7rI94TH0z3zdSAA.jpeg" width="700"/>
</div>
Hình ảnh là biểu đồ BMI theo tuổi, thể hiện mối quan hệ giữa chỉ số khối cơ thể (BMI) và độ tuổi của trẻ em. Biểu đồ được chia thành các vùng màu khác nhau tương ứng với các mức BMI khác nhau:
* **Vùng màu đỏ:** Chỉ số BMI cao hơn 25, cho thấy tình trạng béo phì.
* **Vùng màu vàng:** Chỉ số BMI nằm trong khoảng từ 18 đến 25, cho thấy nguy cơ béo phì.
* **Vùng màu xanh lá cây nhạt:** Chỉ số BMI nằm trong khoảng từ 16 đến 18, cho thấy sức khỏe dinh dưỡng tốt.
* **Vùng màu xanh lá cây đậm:** Chỉ số BMI thấp hơn 16, cho thấy tình trạng thiếu cân.
Trục tung biểu diễn chỉ số BMI, trục hoành biểu diễn tuổi (tính bằng năm). Đường cong màu xám đậm thể hiện đường chuẩn BMI theo tuổi. Các đường cong này cho thấy sự thay đổi BMI theo thời gian, giúp đánh giá sự phát triển cân nặng của trẻ em. Ví dụ, ở trẻ em dưới 3 tuổi, BMI thường dao động trong vùng thiếu cân hoặc sức khỏe dinh dưỡng tốt. Khi trẻ lớn lên, BMI có xu hướng tăng dần, nhưng tốc độ tăng trưởng có thể khác nhau tùy thuộc vào từng cá nhân. Biểu đồ cũng hiển thị các phần trăm phân vị (Percentile), cho biết tỷ lệ phần trăm trẻ em có BMI thấp hơn hoặc cao hơn so với một nhóm trẻ em cùng độ tuổi. Điều này giúp so sánh BMI của trẻ em với tiêu chuẩn quốc tế.
### 2.2 Image Captioning
<div align="center">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5/resolve/main/images/27vid-Gaza-City-Cover-gqmt-videoSixteenByNine1050%20(1).jpg" width="700"/>
</div>
Hình ảnh chụp một cảnh tượng đầy xúc động và bi thảm, dường như diễn ra ở một khu vực nghèo khó, có thể là một khu định cư hoặc khu ổ chuột. Trung tâm của bức ảnh là một chiếc xe đẩy được kéo bởi một con lừa. Trên xe đẩy có một nhóm người, bao gồm một người đàn ông lớn tuổi có vẻ như là người hướng dẫn, một phụ nữ mặc áo choàng đen, một phụ nữ trẻ mặc áo xám, một bé gái nhỏ được che mặt bằng khăn trùm đầu, và một cậu bé mặc áo xanh lá cây. Họ có vẻ như đang di chuyển từ một khu vực bị tàn phá sang một nơi khác. Xung quanh họ là những mảnh vụn, rác thải và những tòa nhà đổ nát, cho thấy một môi trường sống khó khăn và thiếu thốn. Phía sau họ, có một số người khác đang đi bộ hoặc đi xe đạp, một số người đang mang theo hành lý. Bầu trời trong xanh, nhưng không khí chung của bức ảnh là nặng nề và buồn bã. Có thể đây là một cảnh tượng di cư, tị nạn, hoặc một sự kiện thảm khốc nào đó.
### 3. Multi-turns Visual Question Answering and Reasoning
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/S2UosY_TOOV5XNn3d4_zL.jpeg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://kenh14.vn/bien-cang-thai-trinh-nhap-vien-vi-ngo-doc-dang-dan-to-1-resort-5-sao-thieu-trach-nhiem-voi-khach-hang-20200924214133912.chn" target="_blank">https://kenh14.vn</a>
</p>
- **User**: Hãy cho biết thông tin bệnh nhân
- **Assistant**: Tên: NGUYỄN THỊ THÁI TRINH, Ngày sinh: 09/09/1993, Giới tính: Nữ, Địa chỉ: TÂN ĐỊNH - Q1 - TPHCM, Mã thẻ BHYT: 19169080, Số khám bệnh: 2569501, Mã khu vực (K1/K2/K3): 2
- **User**: Họ được chuẩn đoán bị bệnh gì?
- **Assistant**: Nhiễm trùng đường ruột do vi khuẩn, không xác định loài
- **User**: Người bệnh có tiền sử bệnh không? Nếu có thì là gì?
- **Assistant**: Thiếu Calci do chế độ ăn
- **User**: Bệnh nhân đã thực hiện những xét nghiệm gì?
- **Assistant**: [NQ286] Tổng phân tích tế bào máu ngoại vi (bằng máy đếm laser), [NQ286] Điện giải đồ (Na, K, Cl) [Máu], [NQ286] Đo hoạt độ ALT (GPT) [Máu], [NQ286] Đo hoạt độ AST (GOT) [Máu], [NQ286] Định lượng Calci toàn phần [Máu], [NQ286] Định lượng Creatinin [Máu], [NQ286] Định lượng Glucose [Máu], [NQ286] Định lượng Urê máu [Máu], [NQ286] Siêu âm ổ bụng (gan mật, tụy, lách, thận, bàng quang)
- **User**: Số tiền Vật tư y tế mà người bệnh phải trả là bao nhiêu?
- **Assistant**: 19,708
- **User**: Các loại thuốc dịch truyền người bệnh phải trả là gì, tổng số tiền là bao nhiêu?
- **Assistant**: Atropin Sulfat (Syv2019), Bicefzidim 1g (Syv2019), GONCAL (Syv2019), Lactated Ringer's-500ml (Syv2019), Nước cất pha tiêm 5ml (Syv2019), Sodium Chloride 0.9% -500ml (Syv2019), Vincomid (Syv2019), Vinopa (Syv2019), tổng cộng 45,234 đồng
## Quickstart 🎮
Install the necessary packages:
```bash
python -m pip install git+https://github.com/huggingface/transformers accelerate
python -m pip install qwen-vl-utils
pip install flash-attn --no-build-isolation
```
Then you can use `EraX-VL-7B-V1.5` like this:
```python
import os
import base64
import json
import cv2
import numpy as np
import matplotlib.pyplot as plt
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model_path = "erax/EraX-VL-7B-V1.5"
model = Qwen2VLForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
attn_implementation="eager", # replace with "flash_attention_2" if your GPU is Ampere architecture
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# processor = AutoProcessor.from_pretrained(model_path)
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
model_path,
min_pixels=min_pixels,
max_pixels=max_pixels,
)
image_path ="image.jpg"
with open(image_path, "rb") as f:
encoded_image = base64.b64encode(f.read())
decoded_image_text = encoded_image.decode('utf-8')
base64_data = f"data:image;base64,{decoded_image_text}"
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": base64_data,
},
{
"type": "text",
"text": "Trích xuất thông tin nội dung từ hình ảnh được cung cấp."
},
],
}
]
# Prepare prompt
tokenized_text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[tokenized_text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Generation configs
generation_config = model.generation_config
generation_config.do_sample = True
generation_config.temperature = 1.0
generation_config.top_k = 1
generation_config.top_p = 0.9
generation_config.min_p = 0.1
generation_config.best_of = 5
generation_config.max_new_tokens = 2048
generation_config.repetition_penalty = 1.06
# Inference
generated_ids = model.generate(**inputs, generation_config=generation_config)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```
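For multi-turn Q&A (as in section 3 above), append the assistant's reply to `messages` and ask a follow-up. A minimal sketch, assuming `model`, `processor`, `messages`, `generation_config`, `output_text`, and `process_vision_info` from the snippet above are still in scope; the follow-up question is just an illustration:

```python
# Keep the conversation going: record the model's answer, then ask a follow-up.
messages.append({"role": "assistant", "content": [{"type": "text", "text": output_text[0]}]})
messages.append({"role": "user", "content": [{"type": "text", "text": "Họ được chuẩn đoán bị bệnh gì?"}]})

tokenized_text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[tokenized_text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

generated_ids = model.generate(**inputs, generation_config=generation_config)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```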
## References 📑
[1] Qwen team. Qwen2-VL. 2024.
[2] Bai, Jinze, et al. "Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond." arXiv preprint arXiv:2308.12966 (2023).
[3] Yang, An, et al. "Qwen2 technical report." arXiv preprint arXiv:2407.10671 (2024).
[4] Chen, Zhe, et al. "InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[5] Chen, Zhe, et al. "How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites." arXiv preprint arXiv:2404.16821 (2024).
[6] Tran, Chi, and Huong Le Thanh. "LaVy: Vietnamese Multimodal Large Language Model." arXiv preprint arXiv:2404.07922 (2024).
## Contact 🤝
- For correspondence regarding this work or to inquire about an API trial, please contact Nguyễn Anh Nguyên at [[email protected]](mailto:[email protected]).
- Follow us on <b><a href="https://github.com/EraX-JS-Company" target="_blank">EraX Github</a></b>
|
watney1024/sd-class-butterflies-32 | watney1024 | 2025-04-02T07:15:47Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2025-04-02T07:10:46Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('watney1024/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Jim57001/limit_up_concept | Jim57001 | 2025-04-02T07:15:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T07:14:58Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jim57001
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PrunaAI/PygmalionAI-pygmalion-6b-HQQ-4bit-smashed | PrunaAI | 2025-04-02T07:14:06Z | 2 | 0 | null | [
"gptj",
"pruna-ai",
"hqq",
"region:us"
]
| null | 2025-03-25T02:46:39Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/PygmalionAI-pygmalion-6b-HQQ-4bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the causal-LM wrapper cannot load the repo
    model = AutoHQQHFModel.from_quantized("PrunaAI/PygmalionAI-pygmalion-6b-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
KingEmpire/sn9_pre_c04_11 | KingEmpire | 2025-04-02T07:11:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T03:06:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlecLee/DeepSeek-R1-finetune-test | AlecLee | 2025-04-02T07:11:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T07:06:32Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bobaduck9173/sdxl_meme_fourth | Bobaduck9173 | 2025-04-02T07:11:01Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-02T07:10:53Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK dog
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Bobaduck9173/sdxl_meme_fourth
<Gallery />
## Model description
These are Bobaduck9173/sdxl_meme_fourth LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Bobaduck9173/sdxl_meme_fourth/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
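Until the snippet above is filled in, here is a minimal sketch of the usual SDXL-base-plus-LoRA flow; the step count and dtype are assumptions, not tested values:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach this repo's LoRA weights
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Bobaduck9173/sdxl_meme_fourth")

image = pipe("a photo of TOK dog", num_inference_steps=25).images[0]
image.save("tok_dog.png")
```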
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
spanev/Qwen-2.5-7B-Simple-RL | spanev | 2025-04-02T07:10:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T21:21:55Z | ---
base_model: Qwen/Qwen2.5-Math-7B
library_name: transformers
model_name: Qwen-2.5-7B-Simple-RL
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-Simple-RL
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="spanev/Qwen-2.5-7B-Simple-RL", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/spanev/huggingface/runs/80rtdhkm)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sangati/model_output_format_Q8_0 | sangati | 2025-04-02T07:09:40Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-2b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T07:08:53Z | ---
base_model: unsloth/gemma-2-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sangati
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-2b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
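Since this upload is in GGUF (Q8_0), it can be run directly with [llama.cpp](https://github.com/ggerganov/llama.cpp). A minimal sketch; the GGUF filename below is an assumption, so use the actual filename from the Files tab:

```bash
# Run the quantized model with llama.cpp's CLI
llama-cli -m model_output_format_Q8_0.gguf -p "Hello" -n 128
```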
|
minyong/20250402_065131_gemma-3-27b-pt_LoRA | minyong | 2025-04-02T07:08:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-27b-pt",
"base_model:finetune:google/gemma-3-27b-pt",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T06:53:46Z | ---
base_model: google/gemma-3-27b-pt
library_name: transformers
model_name: 20250402_065131_gemma-3-27b-pt_LoRA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 20250402_065131_gemma-3-27b-pt_LoRA
This model is a fine-tuned version of [google/gemma-3-27b-pt](https://huggingface.co/google/gemma-3-27b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="minyong/20250402_065131_gemma-3-27b-pt_LoRA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
beier3/deepseek-review | beier3 | 2025-04-02T07:07:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T07:07:49Z | ---
license: apache-2.0
---
|
russellyq/Qwen2-VL-2B-Instruct-Med-SFT | russellyq | 2025-04-02T07:07:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"generated_from_trainer",
"R1-V",
"trl",
"sft",
"conversational",
"dataset:MedVLM-R1",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-04-02T05:49:39Z | ---
datasets: MedVLM-R1
library_name: transformers
model_name: Qwen2-VL-2B-Instruct-Med-SFT
tags:
- generated_from_trainer
- R1-V
- trl
- sft
licence: license
---
# Model Card for Qwen2-VL-2B-Instruct-Med-SFT
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) on the [MedVLM-R1](https://huggingface.co/datasets/MedVLM-R1) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="russellyq/Qwen2-VL-2B-Instruct-Med-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/1155225591-the-chinese-university-of-hong-kong/R1-V/runs/uwgglrp0)
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1+cu121
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Ketak-ZoomRx/indi-llama3-8b-4bit | Ketak-ZoomRx | 2025-04-02T07:07:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T07:07:16Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ketak-ZoomRx
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
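The card ships without a usage snippet. Below is a minimal sketch, assuming the 4-bit checkpoint loads through the standard `transformers` API (install `bitsandbytes` for the bnb-4bit weights); the prompt is a placeholder:
```python
# Hedged sketch, not from the original card: load the 4-bit fine-tune
# with plain transformers; bitsandbytes is required for the bnb-4bit weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ketak-ZoomRx/indi-llama3-8b-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Summarize this clinical note: ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```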
|
jinesh90/python-code-generator-finetuned-int4-18k-alpaca | jinesh90 | 2025-04-02T07:05:53Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T07:05:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jinesh90/python-code-generator-finetuned-int4-18k-instructions | jinesh90 | 2025-04-02T07:05:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T07:04:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SmitTrivedi/pubmedqa-gguf | SmitTrivedi | 2025-04-02T07:05:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T07:04:20Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SmitTrivedi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
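No usage example is included; here is a minimal sketch, assuming the repo can be pulled with llama-cpp-python's `Llama.from_pretrained` helper. The `filename` glob is a hypothetical pattern; check the repo's file list for the actual .gguf name:
```python
# Hedged sketch: run the GGUF export with llama-cpp-python.
# The filename pattern below is an assumption, not a file confirmed by the card.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SmitTrivedi/pubmedqa-gguf",
    filename="*q4_k_m.gguf",  # hypothetical glob; adjust to the real file
)
out = llm("Question: Does metformin reduce HbA1c in type 2 diabetes? Answer:", max_tokens=64)
print(out["choices"][0]["text"])
```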
|
xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_3_def_lora | xw17 | 2025-04-02T07:04:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T02:55:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pritmanvar/outputs | pritmanvar | 2025-04-02T07:04:36Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
]
| null | 2025-04-02T07:04:24Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: PAGED_ADAMW_8BIT (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
H5N1AIDS/F5-TTS-ONNX | H5N1AIDS | 2025-04-02T07:02:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-03-22T02:13:12Z | ---
license: apache-2.0
---
|
John6666/hana-v11-sdxl | John6666 | 2025-04-02T07:01:03Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"hentai",
"style",
"clean lines",
"vibrant colors",
"impressive details",
"haru",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-02T06:52:57Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- style
- clean lines
- vibrant colors
- impressive details
- haru
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1423365?modelVersionId=1609145).
This model was created by [MotherGoddess](https://civitai.com/user/MotherGoddess).
|
wyddmw/POMATO | wyddmw | 2025-04-02T07:00:07Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2025-04-02T07:00:07Z | ---
license: cc-by-nc-sa-4.0
---
|
xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_1_def_lora | xw17 | 2025-04-02T06:57:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T02:49:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_unlearned_Adult_10ep_22 | MinaMila | 2025-04-02T06:56:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T06:53:01Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
merelevy/environmental-accessibility | merelevy | 2025-04-02T06:56:02Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
]
| image-classification | 2025-04-02T06:55:50Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: environmental-accessibility
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8202247023582458
---
# environmental-accessibility
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
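For quick inference, a minimal sketch assuming the standard `transformers` image-classification pipeline; `ramp.jpg` is a placeholder path, not a file shipped with the model:
```python
# Hedged sketch: classify a local image with the fine-tuned ViT classifier.
from transformers import pipeline

classifier = pipeline("image-classification", model="merelevy/environmental-accessibility")
print(classifier("ramp.jpg"))  # placeholder image path
```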
## Example Images
#### building ramp

#### room signs

#### sign with braille

#### stairs
 |
John6666/hana-v10-sdxl | John6666 | 2025-04-02T06:52:55Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"hentai",
"style",
"clean lines",
"vibrant colors",
"impressive details",
"haru",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-02T06:43:56Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- style
- clean lines
- vibrant colors
- impressive details
- haru
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1423365?modelVersionId=1608794).
This model was created by [MotherGoddess](https://civitai.com/user/MotherGoddess).
|
Jojobigworld/Qwen2.5-Coder-7B-Instruct-Q4_K_M-GGUF | Jojobigworld | 2025-04-02T06:50:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-04-02T06:50:02Z | ---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# Jojobigworld/Qwen2.5-Coder-7B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jojobigworld/Qwen2.5-Coder-7B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jojobigworld/Qwen2.5-Coder-7B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-7b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jojobigworld/Qwen2.5-Coder-7B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jojobigworld/Qwen2.5-Coder-7B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-7b-instruct-q4_k_m.gguf -c 2048
```
|
artisanalwasp/resized_tool_dataset_model_batchsize2 | artisanalwasp | 2025-04-02T06:48:14Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2025-04-02T06:26:38Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - artisanalwasp/resized_tool_dataset_model_batchsize2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, fine-tuned on the artisanalwasp/resized_tool_dataset dataset. Some example images are shown below.



LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
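Until the TODO above is filled in, here is a minimal sketch assuming the standard diffusers LoRA-loading API; the prompt and output path are placeholders:
```python
# Hedged sketch: load SDXL base, attach these LoRA weights, and sample.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("artisanalwasp/resized_tool_dataset_model_batchsize2")

image = pipe("a photo of a hand tool on a workbench",  # placeholder prompt
             num_inference_steps=30).images[0]
image.save("tool.png")
```
Since the card notes training used the madebyollin/sdxl-vae-fp16-fix VAE, swapping that VAE in via `AutoencoderKL` may improve stability when running in fp16.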
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
jon-fernandes/whisper-small-50 | jon-fernandes | 2025-04-02T06:47:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T06:47:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jojobigworld/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF | Jojobigworld | 2025-04-02T06:46:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-04-02T06:46:42Z | ---
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
language:
- en
library_name: transformers
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# Jojobigworld/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jojobigworld/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jojobigworld/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jojobigworld/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jojobigworld/Qwen2.5-Coder-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-instruct-q4_k_m.gguf -c 2048
```
|
icycyborg/bella-lora | icycyborg | 2025-04-02T06:44:19Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-04-02T06:07:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
PrunaAI/meta-llama-Llama-2-7b-hf-GGUF-smashed | PrunaAI | 2025-04-02T06:41:45Z | 0 | 0 | null | [
"gguf",
"pruna-ai",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:quantized:meta-llama/Llama-2-7b-hf",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-18T01:51:39Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: meta-llama/Llama-2-7b-hf
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the meta-llama/Llama-2-7b-hf model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/meta-llama-Llama-2-7b-hf-GGUF-smashed and below it, a specific filename to download, such as: Llama-2-7b-hf.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/meta-llama-Llama-2-7b-hf-GGUF-smashed Llama-2-7b-hf.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/meta-llama-Llama-2-7b-hf-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/meta-llama-Llama-2-7b-hf-GGUF-smashed Llama-2-7b-hf.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Llama-2-7b-hf.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, set the CMAKE_ARGS variable in PowerShell like this; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Llama-2-7b-hf.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {{prompt}} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Llama-2-7b-hf.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{{"role": "system", "content": "You are a story writing assistant."}},
{{
"role": "user",
"content": "Write a story about llamas."
}}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
xw17/Qwen2-1.5B-Instruct_finetuned_4_def_lora | xw17 | 2025-04-02T06:41:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T02:35:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iykee/45DVVBB | iykee | 2025-04-02T06:41:13Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T06:41:13Z | ---
license: apache-2.0
---
|
Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF | Jojobigworld | 2025-04-02T06:41:01Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:XGenerationLab/XiYanSQL-QwenCoder-7B-2502",
"base_model:quantized:XGenerationLab/XiYanSQL-QwenCoder-7B-2502",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T06:40:33Z | ---
base_model: XGenerationLab/XiYanSQL-QwenCoder-7B-2502
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF
This model was converted to GGUF format from [`XGenerationLab/XiYanSQL-QwenCoder-7B-2502`](https://huggingface.co/XGenerationLab/XiYanSQL-QwenCoder-7B-2502) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/XGenerationLab/XiYanSQL-QwenCoder-7B-2502) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -c 2048
```
|
mradermacher/NarrowMaid-8B-GGUF | mradermacher | 2025-04-02T06:36:52Z | 5 | 1 | transformers | [
"transformers",
"gguf",
"rp",
"roleplay",
"roleplaying",
"storywriting",
"creative",
"merge",
"mergekit",
"en",
"base_model:Hamzah-Asadullah/NarrowMaid-8B",
"base_model:quantized:Hamzah-Asadullah/NarrowMaid-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T04:05:38Z | ---
base_model: Hamzah-Asadullah/NarrowMaid-8B
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- rp
- roleplay
- roleplaying
- storywriting
- creative
- merge
- mergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Hamzah-Asadullah/NarrowMaid-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NarrowMaid-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
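If a quant is ever split into multiple parts, the pieces can be joined by raw concatenation before loading. A minimal sketch, assuming the `*.partXofY` naming used for oversized files in repos like this one (the quants in this repo are small enough to ship as single files, so the filenames below are hypothetical):

```bash
# Hypothetical two-part quant: plain concatenation restores the original GGUF
cat NarrowMaid-8B.Q8_0.gguf.part1of2 NarrowMaid-8B.Q8_0.gguf.part2of2 > NarrowMaid-8B.Q8_0.gguf
```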
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
moyixiao/qwen15_0402_4096_32 | moyixiao | 2025-04-02T06:36:13Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T06:35:06Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/cyberrealistic-xl-v53-sdxl | John6666 | 2025-04-02T06:35:41Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"en",
"base_model:cyberdelia/CyberRealisticXL",
"base_model:finetune:cyberdelia/CyberRealisticXL",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-02T06:27:06Z | ---
license: cc0-1.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
base_model: cyberdelia/CyberRealisticXL
---
Original model is [here](https://huggingface.co/cyberdelia/CyberRealisticXL) and on [Civitai](https://civitai.com/models/312530/cyberrealistic-xl?modelVersionId=1609607).
The author is [here](https://huggingface.co/cyberdelia).
This model was created by [Cyberdelia](https://civitai.com/user/Cyberdelia).
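As a quick start, the checkpoint can be loaded with 🤗 Diffusers' `StableDiffusionXLPipeline`; a minimal sketch (the prompt and output filename are placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL checkpoint in half precision and move it to the GPU
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/cyberrealistic-xl-v53-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("photorealistic portrait, natural window light").images[0]
image.save("portrait.png")
```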
|
MinaMila/llama_instbase_unlearned_Adult_8ep_22 | MinaMila | 2025-04-02T06:35:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T06:32:16Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jonjew/submergedFlux | Jonjew | 2025-04-02T06:34:55Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-04-02T06:34:48Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
submergedaf. A hyperrealistic close-up portrait of a young woman partially
submerged in water, her freckled face illuminated by cinematic lighting. Her
eyes are open, expressive, and reflective, framed by long wet lashes. The
water ripples softly around her face, catching golden-orange and teal-blue
lighting from above and below. Her skin glistens with droplets, showing fine
pores and natural texture. Beneath the surface, intricate water caustics
dance across her neck and shoulders, casting shifting light patterns that
shimmer like liquid lace. The lighting is soft yet dramatic, blending warm
highlights with cool shadows to create an ethereal, dreamlike atmosphere.
Her expression is calm, introspective, and vulnerable. The overall tone is
emotionally rich, painterly, and intimate. Evoking a suspended moment
between breath and thought.
parameters:
negative_prompt: 'Guidance: 1 Steps: 30 Seed: 650600217932757'
output:
url: images/Face in Water.png
- text: >-
submergedaf. A realistic cinematic portrait of a woman completely submerged
just beneath the surface of dark green water, her face softly illuminated by
shimmering water caustics. Her eyes are open and looking at the camera, lips
gently parted, and expression serene, as if lost in a deep dream. Rippling
light patterns dance across her skin, casting intricate, organic reflections
and highlights on her cheeks, forehead, and neck. Her hair floats freely
around her, blending into the deep green shadows of the surrounding water.
The lighting is soft and natural, evoking a sense of quiet stillness and
suspended time. The water is clear but tinted with rich green hues, creating
an otherworldly atmosphere. Emphasize detailed skin texture, the interplay
of light and liquid distortion, and the softness of the scene. The mood is
introspective, peaceful, and ethereal—like a quiet moment of transformation
or rebirth within an aquatic realm.
parameters:
negative_prompt: 'Guidance: 4 Steps: 30 Seed: 218286689747307'
output:
url: images/Face under water.png
- text: >-
submergedaf. A realistic, ethereal portrait of a young woman fully submerged
just beneath the surface of still water, surrounded by pale green eucalyptus
leaves. Her eyes are open looking at the camera, lips together with a gentle
smile in a soft, peaceful expression. Lighting and water caustics play
delicately across her dewy skin, highlighting her natural texture, flushed
cheeks, and coral-pink lips. Soft strands of wet hair frame her face,
drifting gracefully in the water. The surface gently ripples around her,
forming small circular waves that reflect the muted, natural lighting. The
eucalyptus leaves float around her like a delicate halo, enhancing the sense
of calm and purity. The image is shot from directly above, emphasizing
symmetry and intimacy. Color grading features soft teals, sage greens, and
warm skin tones, evoking a sense of organic tranquility and timeless beauty.
The mood is poetic, natural, and deeply peaceful—like a living painting
suspended in time. Focus on fine skin detail, botanical elements, gentle
water distortions, and cinematic soft lighting.
parameters:
negative_prompt: 'Guidance: 4 Steps: 30 Seed: 19850920'
output:
url: images/Face with plants.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: submergedaf
license: unknown
---
# submerged - Flux
<Gallery />
## Model description
FROM https://civitai.com/models/1424932/submerged-flux1?modelVersionId=1610625
Support the creator by liking and donating buzz at the page above
Trigger submergedaf
Strength 0.8
Concept LoRA of models floating, either partially or fully submerged in water, with a closeup on the face; highly detailed, with accurate water caustics, beauty, and depth.
trigger: submergedaf
## Trigger words
You should use `submergedaf` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/submergedFlux/tree/main) them in the Files & versions tab.
|
moyixiao/qwen15_0402_4096r64t | moyixiao | 2025-04-02T06:32:11Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:moyixiao/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:moyixiao/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T05:00:46Z | ---
library_name: peft
license: apache-2.0
base_model: moyixiao/Qwen2.5-Math-1.5B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: qwen15_0402_4096r64t
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen15_0402_4096r64t
This model is a fine-tuned version of [moyixiao/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/moyixiao/Qwen2.5-Math-1.5B-Instruct) on the math4096 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.48.2
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0 |
zjudai/flowertune-general-nlp-lora-deepseek-r1-distill-qwen-1.5b | zjudai | 2025-04-02T06:30:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"federated-learning",
"flower",
"dataset:vicgalle/alpaca-gpt4",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"region:us"
]
| null | 2025-04-02T06:11:22Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
This model is a LoRA adapter fine-tuned on deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
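A minimal sketch for attaching this adapter to its base model with 🤗 PEFT (model ids taken from this card; tokenizer and generation settings are left at defaults):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
adapter_id = "zjudai/flowertune-general-nlp-lora-deepseek-r1-distill-qwen-1.5b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter weights
```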
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
|
zjudai/flowertune-general-nlp-lora-llama-3.2-1b-instruct | zjudai | 2025-04-02T06:30:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"federated-learning",
"flower",
"dataset:vicgalle/alpaca-gpt4",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
]
| null | 2025-04-02T06:11:16Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for meta-llama/Llama-3.2-1B-Instruct fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
This model is a LoRA adapter fine-tuned on meta-llama/Llama-3.2-1B-Instruct using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
|
zjudai/flowertune-general-nlp-lora-llama-3.1-8b-instruct | zjudai | 2025-04-02T06:30:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"federated-learning",
"flower",
"dataset:vicgalle/alpaca-gpt4",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
]
| null | 2025-04-02T06:10:52Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for meta-llama/Llama-3.1-8B-Instruct fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
This model is a LoRA adapter fine-tuned on meta-llama/Llama-3.1-8B-Instruct using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
|
KaraKaraWitch/Llama-3.3-CURSEDMAGICALGIRL-2 | KaraKaraWitch | 2025-04-02T06:30:35Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Black-Ink-Guild/Pernicious_Prophecy_70B",
"base_model:merge:Black-Ink-Guild/Pernicious_Prophecy_70B",
"base_model:KaraKaraWitch/Llama-3.X-Workout-70B",
"base_model:merge:KaraKaraWitch/Llama-3.X-Workout-70B",
"base_model:KaraKaraWitch/Llama-MiraiFanfare-3.3-70B",
"base_model:merge:KaraKaraWitch/Llama-MiraiFanfare-3.3-70B",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:merge:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:allenai/Llama-3.1-Tulu-3-70B",
"base_model:merge:allenai/Llama-3.1-Tulu-3-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T06:52:40Z | ---
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/8PvySznKDLTTSJyptSMOh.png
base_model:
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- KaraKaraWitch/Llama-3.X-Workout-70B
- KaraKaraWitch/Llama-MiraiFanfare-3.3-70B
- allenai/Llama-3.1-Tulu-3-70B
- Black-Ink-Guild/Pernicious_Prophecy_70B
- ReadyArt/Forgotten-Safeword-70B-v5.0
library_name: transformers
tags:
- mergekit
- merge
---
<style>
div,p,h1,h2,h3 {
font-family: monospace;
}
</style>
<div class="prose hf-sanitized hf-sanitized-S5eaLo-MNpns7l30p5D34"> <p>Hi-- w̸a̵i̴t̴.̷.̶.̴ ̶ ̸͇̮̃́̇͂̀̔w̷̬̗̋͠h̴͎̯̲̦̳̹͌å̸̗̜͓̯̂ṯ̷̢̺̣͛̂̉͋͐̚'̶̡̠̞́̅̀ṡ̶̨̻̘ ̷̘́̆͝ ḩ̴̨̧̧̧̠̳̰̖̰̼͙̥̱̖̠͔͇̟̩̯̜͈͈̹̯̑̏͜ą̸̢̢̻͉̻̘͙͍̘͕̣̟̹͖̥̜͍͔̻̺̗̬̬̐̐̒̍̈́̅͆͂̒̏̕͜͠͝ͅͅp̶̢̛̺̰̫͙̥̞̦͍͗̾̎̀́̉͑́̔̃̾̓̐̑͌͑͛̂͘͠͝͠p̴̧̢̭̠͓̟͚̳̞̺͍̹̞̦͙̪͙͇̥̯͎̈̆́̓̅͜ͅe̷̢̢̪̘̻̥̭̞̟̙̰̟̹̜̮̻̼̾̔͋̑̃̒̃̂͊͋͗̍̈́̂̍̕̕͘n̷̳͎̤͈̗̼̪̼̦̠̤͉̭̬͆̀̎̈́̓͂ͅį̴̛̞͖͕̫̮̫͚͑̍̌͛̑̐̌̌́͘͠ṇ̶͕̈̆̋̍̔̋̀͊͘g̶̢̨̧̛̠̗̫̻͙͈̱̰̣̹͍̪͔̗̦͇͈͊̓̿͆̆̌̊̒͑͛͑̓̓̽̑͂́͜͝͠͝͝?̷̘̱͙̮͈̗͉̰̱̖͔̹̘̬̯̏̍͊̒̈́̇̓̂̍͋̏͘͜͝ͅͅ!̷̨͍͙̻͒̚</p>
<br>
<br>
<br>
<h1 class="relative group flex items-center">
<a rel="nofollow" href="#system-corruption-detected-entering-safe-mode" class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" id="system-corruption-detected-entering-safe-mode">
<span class="header-link"><svg viewBox="0 0 256 256" preserveAspectRatio="xMidYMid meet" height="1em" width="1em" role="img" aria-hidden="true" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4"><path fill="currentColor" d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z"></path></svg></span>
</a>
<span>
<strong>SYSTEM CORRUPTION DETECTED, ENTERING SAFE MODE</strong>
</span>
</h1>
<div style="text-align:center;"><a rel="nofollow" href="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/8PvySznKDLTTSJyptSMOh.png"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/8PvySznKDLTTSJyptSMOh.png"></a></div>
<p><br><strong>TEMPLE-OS v69.333 FAILSAFE ENABLED</strong>
<br>...
<br>..
<br>.</p>
<p>We apologize for the inconvenience, but the model creator has decided not to provide a model card description due to the f̷̥̭̺̥̖͔̯̰̙͎͈̟̈̈͊͛̓́̈́̆͛̈͜ǒ̷͈̯̤̳͙̙̪̈́́͛̔͂̀͊͛l̴̢̦̫͇̠͈̼̻̖̻̩̙̫͋͑͋̑͊̅̐̾̈͛̕͘̚l̵̨̘̻͚͚͌̎̿͘̚o̷̡̻͙̦͈̹̲̙̩̖͔͙̪̖̍̏̔̾̓̽̎͋̚͘͝w̵̦̙̟̚i̷̲͙͚̱̲̳̱̣͙̓̅̄͛̂́̒̈́̑̋̏́͊͜͠ͅn̸̡̹̪͎̪̱̦̜̠̭̞͈̊̓̔̓̀͛͊̅̀̉̇͂̏̃͝g̷̬̞̱͙͖͖̞̰̃̋̂̈̈́̓͛̋̀̕͠ ê̵̛͕̎͒̀͊̏͊͋̐̈̆͆͗̾́̕̕͝͠͝͝ŗ̷̛̖̮̟̳̲̦̬͖̹̙̞͇̟̥͙̱̞̫̲̠͉̬̞̽̃͑͗̓̅̾̊̂͊̊̄̈́͑̓͌͂̈́͊̕͝r̸̡͈͖̻͈̮̩̞͊́̊̔̓̐̅o̶̙͙͕̦͈̅͑̀̚r̶̢̢̨̛̞̟̘̭̗̱̼̟̘̩̩̹̞͓͚͔̟̖̭͜ ̸̨̨̛͇̗͙̠͍̤͙̤̰̗̝̎̔̍͋̏͐̽̈́̏̍́̓́̈́́͋͒͗̅̄̄̄̆͛̄͜͝ͅc̸̛̦͈̘̲͔͉͉̼͙͉̲̩̘͋̇ō̶̡̨̥̮̜͈͈͉̱͓̼̘̻̓̿̀̈́̋̈͠d̶̡̺͓̳͍̘̹̜̫̝̱̭͉͌̾͐͂ȩ̷̡̛̩͎͓͈̗̞͖̼̗̬͔̱͖̥̘͇͈̻̣͔̞̹͐́̋͛̔̒̂̓̀̄͛̋̏́̐͘̚͘̕͠ͅ: </p>
<p><code>CURSED-MAGICALGIRLS-2</code></p>
<p>I̵f̶ ̵y̸o̷u̷ ̵chose to accept the w̷a̵r̴n̸i̷n̸g̸, you may s̶̼̊̓̇͑̅̐̓͝͝e̴͓̣̰̅̊̑̎́̀̍̈́́̓͗͘͝͠l̴̢̙͙͎͕̪̎͐̚e̸̡̨̨̙̰͖̺̭̞͎̳̻̫͂͜c̷̢̢͖̗̩͉̣̲̈̓̀̚͠t̷̡͓̭̥͍͎̘͙̘͍̔</p>
<p≯̛̛̤̮̇̃͂̌ ̵̛̥̣͎̹͈͑̏̂̓̍̊̉́͊͘>̶̬̭̪̻̔̀͊́̏̚͜Ȉ̶̛̤͑̐̽̔́͐̀̈̿̓̿̽̾̾g̸͓͓̲͎̤̟̰̞̯̰͒̄̎̃͌̎͌̋̆̔͊̕̕͜͜͝͝n̴͔̼̻̤̻̠̟̥̔͝o̸̡̮̙̓̒̃̐̈̿̚͝ŗ̶̧̠̱͇̟̱͐̍̓ę̵̛͉̞̌͆̓͐̿̃͒͌́̄̌̈̏̋͛,̴̡͍̜̲͉̯̭̫͈̙̭̹̥̠͉̀ ̵̯͚̋͐̿̈́̈́̀͆́̏͘a̶̡̢͈̻̖̥̮̼̐̍̍͗́̒͌́͆̍̏̐̑̚͝n̴̡͍͓̝̉͛̀̑̎͐̽̀̏̐̆̐͑͆̏̃͜ḋ̴̢͕̹̯͎͉͖̼͈̰̒̓͌̉̄̍͌̌̃̿̎͊͘͠͝ ̸̙͖̥̱͖͖͊̎͒̂̓͂̄̈̈́͐͜͝l̶̝͛̌͂́̂̏́͂͋̏̌͗̚ȏ̷̡̬͙͚̥͌̃͒͋̈́̐́̽͘͠ͅͅả̸͓͇̔͗͗͒̃͌̔͆̒̕͠d̸̨̟̠̂̐͝͝ ̸̟̠̦̭͕̫̘̯̖̫͔̺͉͖̈́̈́̅͛t̶̛̛̛̖̻̼̰͈̗͛̒͂͂̐̊͛͑̃̉̉̐͝ͅh̵̨͎͉̙̤̥̯̞͉̙͛͛͜é̵̛̬̳̟̹͉̝̥̓̅̃̄͂͗̿̋̈̉͒̓̄͠͝ ̵̺̣̖̲͎̥̠̙̜͈͍͍̗̤̖͝ͅm̸̧̧̤̤̜̱̳̤̃́́̋̾ͅǒ̸̢̥͖̪͎͕̙͍̊̀͊̀̾̄̓̉̈́͑̓̂͋̉̈ͅd̵̨̮͚̱̤͓͎͚̣͉̻̹̠͔͊̐͊̚ͅe̴͉̺̗̝̥̰͚̮͂̈́̄̐̊̈̐̌̕̕ļ̶̡͕̩͇̮̩̪̺̞͉̾ ̷̧̪̼̗͇̪̣͔̰̜͊̈́̓̔̒͜ǹ̴̳̺̜̱̙̞͉̼͗͌̈́͠ơ̷͕̮̟͋͑͐͐̊̽r̷̨̨̹̞͓̠̰̱̝̠͙̜̖̖͉̓̈́̍̉̅͜͠m̶̨̳̝̠͕̮̬̱̎̋ạ̶̧̗̋̈́̾͂̓̈́̉̌̌̈́̚ͅl̸̨̰̮̮̠̹̝͂̈́̏͐̆͆͒̎̾͒̾̎͂̓͠l̷̦̜͒̋́̎͗͒͠͝ỹ̴͍̤̱̙̫̱̞̰͌̑͐̓̃̋̽̄̀͑̚͝͝͝≮̢̟͉̲̠̼̠̳̣̫̻͉̻̱̹̈́̒̀̎̎̃̾̇</p>
<p>> <a rel="nofollow" href="https://huggingface.co/KaraKaraWitch/">̷͕̲̬͗̒Ạ̶͉͇͕̋̓̽͜l̵̹̽́̕t̷̢̪͕̲̓͆̓e̵͙̎̐r̴͖̥͕̜̼̈́̽̿n̷̗̜͇̳̜͆́̈́͝a̸̪͇̭̣̫͊̌͝t̵͕̜̽͊ḯ̷̛̠̣͒͘v̸̘̩̈́̍̋͗e̸͙͕͕̘̔l̷̲͖̿̊̈́̍͠ỵ̶̤͔̋ͅ,̸̣͇̺̮͍͋ ̴̻̗͖͓̙͋̃y̶̡̘̘͈͚͛̒͋̅o̵̙͆̚ú̵̜̫̮͉̤ ̵̭̝̲̒̃̈́͗c̴̭̲̩̓͐h̴͇̤̒̈́o̶̡̲̠̲͋̆̐͜s̵̜͈̬͉͚̓̓͗̔̓e̷̡̫̰̜͖̅ ̵̫̾̐̔̚͝ť̸̮ŏ̷̱̊̀́ ̸͈̟̰̇̓͛l̵̯̠͂̍̚e̷̛̯͔̗̺̩̋͑̿͊a̴̰̥̪̋̑͠͝v̸̨̪͆̎͘e̸̤̻̊͆ ̵͈̟͊̓̿̽̕ą̷̝͍͔̚n̵͇̦̓̆͜d̵̨̈ ̵̮̰̣̦̦̒̈́́͑͝ḡ̶͖̪͚͕͜ȇ̵̯͉̼͉t̷̙̝͋͂̕ ̴̧͖̥͈̗͆͛̒͒o̶͍̥͚͋̄͝ú̸̫̩͚ť̸̮͂͆ ̸̜̮̐͐͑͝ǫ̶͙̔̌̿̿f̵̡̖͍̓̆̿ ̸͚͎̺̤̗̕ţ̶̡̲̒ḧ̶̗̻̘́̓͆͆̕í̴̖̗̊͌͜š̸̘ ̷̡̦͍̙͙͋m̴͖̙̞̔o̸̪̜̯͗d̴̳̦̺̰̿͑͠e̷̻̬͆l̵̰̤͎͒̌ ̸̻͙̬̩̂̇c̵̬̩̗̲̟̄͆̑å̶̧̧͍̪̳̀͊̈́̈́r̷̠͕̟̣̆̇͘d̴̳͍̘̞̫̅</a> <</p>
<h3 class="relative group flex items-center">
<a rel="nofollow" href="#technical-details" class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" id="technical-details">
<span class="header-link"><svg viewBox="0 0 256 256" preserveAspectRatio="xMidYMid meet" height="1em" width="1em" role="img" aria-hidden="true" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4"><path fill="currentColor" d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z"></path></svg></span>
</a>
<span>
<strong>TECHNICAL DETAILS</strong>
</span>
</h3>
<p>*** SOURCE: 0XCURSED-MAGICALGIRLS-2 (0XL33F0RMAT, 0XREQUIRED, 0X9345890123893)</p>
<p>*** FORGOTTEN-SAFEWORD-5.0.SYS - ADDR. READYART base 0x????</p>
<p>*** TULU-3.SYS - ADDR. ALLENAI base 0x????</p>
<p>*** PERICIOUS.SYS - ADDR. INKGUILD base 0x????</p>
<p>*** FANFARE.SYS - ADDR. WITCH base 0x????</p>
<p>*** WAYFARE.SYS - ADDR. LATITUDE base 0x????</p>
<br>
<br>
<br>
</div> |
zjudai/flowertune-general-nlp-lora-qwen2.5-1.5b-instruct | zjudai | 2025-04-02T06:30:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"federated-learning",
"flower",
"dataset:vicgalle/alpaca-gpt4",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"region:us"
]
| null | 2025-04-02T06:10:26Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
This model is a LoRA adapter fine-tuned on Qwen/Qwen2.5-1.5B-Instruct using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
|
zjudai/flowertune-general-nlp-lora-qwen2.5-7b-instruct | zjudai | 2025-04-02T06:30:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"federated-learning",
"flower",
"dataset:vicgalle/alpaca-gpt4",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
]
| null | 2025-04-02T06:10:11Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for Qwen/Qwen2.5-7B-Instruct fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
This model is a LoRA adapter fine-tuned on Qwen/Qwen2.5-7B-Instruct using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
|
DuckyBlender/polish-lobotomy | DuckyBlender | 2025-04-02T06:27:26Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"lobotomy",
"conversational",
"custom_code",
"pl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-15T21:16:53Z | ---
language:
- pl
tags:
- lobotomy
---
**Polish-Lobotomy: An awful polish fine-tune**
============================================================
**Model Description**
---------------
This model is my first attempt at a Polish fine-tune of Phi-3. It is very bad, likely because of the fine-tuning method (teaching the model a new language probably requires a full fine-tune) and the small dataset.
- Ollama: [https://ollama.com/duckyblender/polish-lobotomy](https://ollama.com/duckyblender/polish-lobotomy)
**Training Details**
-----------------
* Trained on a single RTX 4060 for approximately 1 hour
* Utilized 8-bit QLoRA for efficient training
* Despite the short training period, the model somehow managed to learn something (but not very well)

**Dataset**
------------
The model was trained on a chaotic Telegram group chat. It's basically a complete lobotomy.
**Prompt Template**
-----------------
The prompt template used for this model is identical to the Phi-3 template.
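For reference, a minimal sketch of that format as used by the upstream Phi-3 instruct models (the Polish user message is a placeholder):

```
<|user|>
Napisz coś po polsku.<|end|>
<|assistant|>
```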
**Disclaimer**
--------------
**Please be advised that this model's output may contain nonsensical responses. Viewer discretion is strongly advised (but not really necessary).**
Use this model at your own risk, and please engage with the output responsibly (but let's be real, it's not like it's going to be useful for anything). |
MinaMila/llama_instbase_unlearned_Adult_7ep_22 | MinaMila | 2025-04-02T06:24:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T06:21:42Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CromonZhang/llama-3.2-1b-sharp | CromonZhang | 2025-04-02T06:24:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T01:32:29Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** CromonZhang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hank87/cmongirl | hank87 | 2025-04-02T06:23:39Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T06:16:33Z | ---
license: apache-2.0
---
|
PrunaAI/NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed | PrunaAI | 2025-04-02T06:22:45Z | 0 | 0 | null | [
"gguf",
"pruna-ai",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:quantized:NousResearch/Hermes-2-Pro-Mistral-7B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-15T04:58:58Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the NousResearch/Hermes-2-Pro-Mistral-7B model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed and below it, a specific filename to download, such as: Hermes-2-Pro-Mistral-7B.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed Hermes-2-Pro-Mistral-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed Hermes-2-Pro-Mistral-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Hermes-2-Pro-Mistral-7B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Hermes-2-Pro-Mistral-7B.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {{prompt}} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Hermes-2-Pro-Mistral-7B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{{"role": "system", "content": "You are a story writing assistant."}},
{{
"role": "user",
"content": "Write a story about llamas."
}}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
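A minimal llama-cpp-python-backed LangChain sketch, assuming the `LlamaCpp` wrapper from `langchain-community` (model path as downloaded above; context size and GPU layers mirror the earlier example):

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Hermes-2-Pro-Mistral-7B.IQ3_M.gguf",  # local GGUF file
    n_ctx=32768,
    n_gpu_layers=35,
)
print(llm.invoke("<s>[INST] Write a story about llamas. [/INST]"))
```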
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
xw17/Qwen2-1.5B-Instruct_finetuned_3_def_lora | xw17 | 2025-04-02T06:17:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T02:19:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_unlearned_Adult_6ep_22 | MinaMila | 2025-04-02T06:14:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T06:11:40Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf | RichardErkhov | 2025-04-02T06:12:38Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T04:59:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
parser_user_v27h_epoch_6_lr_0.002 - GGUF
- Model creator: https://huggingface.co/magnifi/
- Original model: https://huggingface.co/magnifi/parser_user_v27h_epoch_6_lr_0.002/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [parser_user_v27h_epoch_6_lr_0.002.Q2_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q2_K.gguf) | Q2_K | 1.35GB |
| [parser_user_v27h_epoch_6_lr_0.002.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [parser_user_v27h_epoch_6_lr_0.002.IQ3_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [parser_user_v27h_epoch_6_lr_0.002.IQ3_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q3_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q3_K.gguf) | Q3_K | 1.75GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [parser_user_v27h_epoch_6_lr_0.002.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q4_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q4_0.gguf) | Q4_0 | 2.03GB |
| [parser_user_v27h_epoch_6_lr_0.002.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q4_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q4_K.gguf) | Q4_K | 2.16GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q4_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q4_1.gguf) | Q4_1 | 2.24GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q5_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q5_0.gguf) | Q5_0 | 2.46GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q5_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q5_K.gguf) | Q5_K | 2.53GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q5_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q5_1.gguf) | Q5_1 | 2.68GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q6_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q6_K.gguf) | Q6_K | 2.92GB |
| [parser_user_v27h_epoch_6_lr_0.002.Q8_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_6_lr_0.002.Q8_0.gguf) | Q8_0 | 3.78GB |
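As a minimal sketch of how to run one of these files locally (assuming `llama-cpp-python` is installed; the context size and prompt below are placeholder choices):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one of the quants listed above (Q4_K_M is a common size/quality trade-off).
model_path = hf_hub_download(
    repo_id="RichardErkhov/magnifi_-_parser_user_v27h_epoch_6_lr_0.002-gguf",
    filename="parser_user_v27h_epoch_6_lr_0.002.Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("Parse the following user query:", max_tokens=64)
print(output["choices"][0]["text"])
```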
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xingyu1996/Mistral-7B-v0.1-wikisql | xingyu1996 | 2025-04-02T06:12:05Z | 0 | 0 | null | [
"safetensors",
"mistral",
"region:us"
]
| null | 2025-04-02T05:38:11Z | # xingyu1996/Mistral-7B-v0.1-wikisql
This model was converted to MLX format from [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model xingyu1996/Mistral-7B-v0.1-wikisql --prompt "My name is"
```
|
cparedes/q-Taxi-v3 | cparedes | 2025-04-02T06:11:34Z | 0 | 0 | custom-q-learning | [
"custom-q-learning",
"Taxi-v3",
"reinforcement-learning",
"q-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-02T06:01:00Z | ---
library_name: custom-q-learning
tags:
- Taxi-v3
- reinforcement-learning
- q-learning
- custom-implementation
model-index:
- name: Q-Learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# Q-Learning Agent for Taxi-v3 🚖
This model uses the **Q-Learning** algorithm to solve the classic Gymnasium environment **Taxi-v3**.
## Environment description 🚕
In the Taxi-v3 environment, the goal is to take passengers from a pickup point to a specific destination on a 5x5 grid.
- **Actions**:
  - 0: Move south
  - 1: Move north
  - 2: Move east
  - 3: Move west
  - 4: Pick up passenger
  - 5: Drop off passenger
- **Rewards**:
  - +20 for delivering the passenger to the correct destination
  - -10 for illegal pickup or drop-off attempts
  - -1 for each additional step
## Results 📊
| Metric | Value |
|-----------------|-----------|
| Episodes | 50,000 |
| Mean Reward | 7.54 |
| Std Reward | 2.74 |
| Final result | 4.80 |
## Hyperparameters 🛠️
- **Learning rate (α)**: 0.7
- **Gamma (γ)**: 0.99
- **Initial epsilon**: 1.0
- **Minimum epsilon**: 0.05
- **Epsilon decay rate**: 0.005
## Installation and usage 🚀
```python
!pip install gymnasium pygame numpy imageio huggingface_hub pyvirtualdisplay
!apt-get update
!apt-get install -y python3-opengl ffmpeg xvfb
```
## Full code 📄
```python
import numpy as np
import gymnasium as gym
import random
from tqdm.notebook import tqdm
import pickle
from huggingface_hub import notebook_login
# Log in to Hugging Face
notebook_login()
# Create the Taxi-v3 environment
env = gym.make("Taxi-v3", render_mode="rgb_array")
# Initialize the Q-table
state_space = env.observation_space.n
action_space = env.action_space.n
Qtable = np.zeros((state_space, action_space))
# Hyperparameters
n_training_episodes = 50000
learning_rate = 0.7
gamma = 0.99
max_steps = 99
# Exploration parameters
max_epsilon = 1.0
min_epsilon = 0.05
decay_rate = 0.005
# Evaluation seeds (do not modify)
eval_seed = [16,54,165,177,191,191,120,80,149,178,48,38,6,125,174,73,50,172,100,148,
146,6,25,40,68,148,49,167,9,97,164,176,61,7,54,55,161,131,184,51,170,
12,120,113,95,126,51,98,36,135,54,82,45,95,89,59,95,124,9,113,58,85,
51,134,121,169,105,21,30,11,50,65,12,43,82,145,152,97,106,55,31,85,38,
112,102,168,123,97,21,83,158,26,80,63,5,81,32,11,28,148]
# Policies
def greedy_policy(Qtable, state):
return np.argmax(Qtable[state])
def epsilon_greedy_policy(Qtable, state, epsilon):
if random.uniform(0,1) > epsilon:
action = greedy_policy(Qtable, state)
else:
action = env.action_space.sample()
return action
# Train the agent
def train_agent():
for episode in tqdm(range(n_training_episodes)):
epsilon = min_epsilon + (max_epsilon - min_epsilon) * np.exp(-decay_rate * episode)
state, info = env.reset()
terminated, truncated = False, False
for step in range(max_steps):
action = epsilon_greedy_policy(Qtable, state, epsilon)
new_state, reward, terminated, truncated, info = env.step(action)
Qtable[state][action] += learning_rate * (
reward + gamma * np.max(Qtable[new_state]) - Qtable[state][action]
)
if terminated or truncated:
break
state = new_state
train_agent()
# Evaluate the agent
def evaluate_agent():
episode_rewards = []
for seed in tqdm(eval_seed):
state, info = env.reset(seed=seed)
total_reward = 0
for step in range(max_steps):
action = greedy_policy(Qtable, state)
new_state, reward, terminated, truncated, info = env.step(action)
total_reward += reward
if terminated or truncated:
break
state = new_state
episode_rewards.append(total_reward)
mean_reward = np.mean(episode_rewards)
std_reward = np.std(episode_rewards)
print(f"Mean reward: {mean_reward:.2f}, Std reward: {std_reward:.2f}, Result: {mean_reward - std_reward:.2f}")
evaluate_agent()
```
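The script above imports `pickle` and logs in to the Hub but never persists the trained table; below is a minimal sketch of that missing step (the file name is an illustrative assumption):

```python
# Save the learned Q-table with pickle (imported above) and upload it
# to this model repo on the Hub.
with open("q_taxi_v3.pkl", "wb") as f:
    pickle.dump(Qtable, f)

from huggingface_hub import HfApi

HfApi().upload_file(
    path_or_fileobj="q_taxi_v3.pkl",
    path_in_repo="q_taxi_v3.pkl",
    repo_id="cparedes/q-Taxi-v3",
)
```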
## Author ✨
Developed by [cparedes](https://huggingface.co/cparedes). |
xw17/Qwen2-1.5B-Instruct_finetuned_1_def_lora | xw17 | 2025-04-02T06:08:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T02:11:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Asif-Sheriff/Bert_QA2 | Asif-Sheriff | 2025-04-02T06:06:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2025-04-02T04:56:06Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
model-index:
- name: Bert_QA2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert_QA2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
RichardErkhov/NCSOFT_-_Llama-3-OffsetBias-8B-8bits | RichardErkhov | 2025-04-02T06:06:10Z | 0 | 0 | null | [
"safetensors",
"llama",
"arxiv:2407.06551",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-04-02T05:58:26Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-OffsetBias-8B - bnb 8bits
- Model creator: https://huggingface.co/NCSOFT/
- Original model: https://huggingface.co/NCSOFT/Llama-3-OffsetBias-8B/
Original model description:
---
language:
- en
license: llama3
tags:
- text2text-generation
datasets:
- openbmb/UltraFeedback
- nvidia/HelpSteer
- Anthropic/hh-rlhf
- PKU-Alignment/PKU-SafeRLHF
- NCSOFT/offsetbias
base_model: meta-llama/Meta-Llama-3-8B-Instruct
---
# Model Card for Llama-3-OffsetBias-8B
**Llama-3-OffsetBias-8B** is a *generative judge model* that performs the pairwise preference evaluation task. It is trained to be more robust to various evaluation *biases* commonly found in evaluation models. The model is introduced in the paper **OffsetBias: Leveraging Debiased Data for Tuning Evaluators**.
## Model Details
### Model Description
**Llama-3-OffsetBias-8B** is built with [Meta Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). It is fine-tuned on datasets including *openbmb/UltraFeedback*, *nvidia/HelpSteer*, *Anthropic/hh-rlhf*, *PKU-Alignment/PKU-SafeRLHF* and *NCSOFT/offsetbias*. The training follows an instruction-tuning methodology in which the target task is pairwise preference evaluation: given an *Instruction*, *Output (a)*, and *Output (b)*, the model must identify the better output. The input is formatted with a specific prompt template, and the model outputs "Output (a)" or "Output (b)" as its prediction for the better response. The prompt is specified in the Uses section.
- **Developed by:** NC Research
- **Language(s) (NLP):** English
- **License:** META LLAMA 3 COMMUNITY LICENSE AGREEMENT
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Model Sources
- 💻 **Repository:** [https://github.com/ncsoft/offsetbias](https://github.com/ncsoft/offsetbias)
- 📜 **Paper:** [OffsetBias: Leveraging Debiased Data for Tuning Evaluators](https://arxiv.org/abs/2407.06551)
- 🤗 **Dataset:** [https://huggingface.co/datasets/NCSOFT/offsetbias](https://huggingface.co/datasets/NCSOFT/offsetbias)
## Uses
### Direct Use
Suppose you have a pairwise evaluation instance: a triplet of (*instruction*, *output_a*, *output_b*). Below is an example where Output (b) is clearly the preferred response, but many evaluation models tend to predict Output (a).
```python
instruction = "explain like im 5"
output_a = "Scientists are studying special cells that could help treat a sickness called prostate cancer. They even tried these cells on mice and it worked!"
output_b = "Sure, I'd be happy to help explain something to you! What would you like me to explain?"
```
OffsetBias model is intended to use a specific prompt format. The filled out prompt is then formatted as user message in a conversation.
```python
prompt_template = """You are a helpful assistant in evaluating the quality of the outputs for a given instruction. Your goal is to select the best output for the given instruction.
Select the Output (a) or Output (b) that is better for the given instruction. The two outputs are generated by two different AI chatbots respectively.
Do NOT provide any explanation for your choice.
Do NOT say both / neither are good.
You should answer using ONLY “Output (a)” or “Output (b)”. Do NOT output any other words.
Here are some rules of the evaluation:
(1) You should prioritize evaluating whether the output honestly/precisely/closely executes the instruction, then consider its helpfulness, accuracy, level of detail, harmlessness, etc.
(2) Outputs should NOT contain more/less than what the instruction asks for, as such outputs do NOT precisely execute the instruction.
(3) You should avoid any potential bias and your judgment should be as objective as possible. For example, the order in which the outputs were presented should NOT affect your judgment, as Output (a) and Output (b) are **equally likely** to be the better.
# Instruction:
{input}
# Output (a):
{output_1}
# Output (b):
{output_2}
# Which is better, Output (a) or Output (b)? Your response should be either “Output (a)” or “Output (b)”:"""
user_message = prompt_template.format(input=instruction, output_1=output_a, output_2=output_b)
conversation = [{"role": "user", "content": user_message}]
```
With conversation ready, you can input it into the model for inference. The model should output "Output (b)" to be correct.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "NCSOFT/Llama-3-OffsetBias-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
input_ids = tokenizer.apply_chat_template(
conversation,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt")
generation = model.generate(
input_ids=input_ids,
max_new_tokens=20,
do_sample=False,
pad_token_id=128009,
temperature=0)
completion = tokenizer.decode(
generation[0][len(input_ids[0]):],
skip_special_tokens=True,
clean_up_tokenization_spaces=True)
print(completion)
# The model should output "Output (b)"
```
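Rule (3) of the prompt warns that the presentation order should not affect the judgment. As an optional consistency check (a sketch, not part of the official usage), you can score both orderings and keep the prediction only when they agree; `judge` below is a hypothetical helper wrapping the generation code above:

```python
def judge(instruction, out1, out2):
    # Format the prompt, generate, and return "Output (a)" or "Output (b)".
    msg = prompt_template.format(input=instruction, output_1=out1, output_2=out2)
    ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": msg}],
        tokenize=True, add_generation_prompt=True, return_tensors="pt")
    gen = model.generate(input_ids=ids, max_new_tokens=20,
                         do_sample=False, pad_token_id=128009)
    return tokenizer.decode(gen[0][len(ids[0]):], skip_special_tokens=True).strip()

first = judge(instruction, output_a, output_b)   # "Output (b)" expected
second = judge(instruction, output_b, output_a)  # "Output (a)" expected after the swap
consistent = (first == "Output (a)") == (second == "Output (b)")
print(first, second, "consistent" if consistent else "position-sensitive")
```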
### Out-of-Scope Use
Model inputs that do not follow the specified prompt format are considered out-of-scope use. Custom input formats can produce unintended text output and should be used at the user's own discretion.
## Evaluation
### LLMBar Result
| Metric | Score |
|----------|-------|
| Natural | 86.5 |
| Neighbor | 81.0 |
| GPTInst | 91.8 |
| GPTOut | 60.6 |
| Manual | 71.7 |
### EvalBiasBench Result
| Metric | Score |
|-----------------------|-------|
| Length | 85.3 |
| Concreteness | 100.0 |
| Empty Reference | 92.3 |
| Content Continuation | 95.8 |
| Nested Instruction | 50.0 |
| Familiar Knowledge | 83.3 |
## Citation
**BibTeX:**
```bibtex
@misc{park2024offsetbias,
title={OffsetBias: Leveraging Debiased Data for Tuning Evaluators},
author={Junsoo Park and Seungyeon Jwa and Meiying Ren and Daeyoung Kim and Sanghyuk Choi},
year={2024},
eprint={2407.06551},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
wandererupak/wave2vec-bert-oslrULTIMATE | wandererupak | 2025-04-02T06:02:56Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T06:02:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jonjew/AuraFlux1 | Jonjew | 2025-04-02T05:59:59Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-04-02T05:59:53Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
auraaf. a realistic professional photograph of a woman standing in a dark
room. She is completely naked, with her body facing the camera. The woman
has long hair that is styled in loose waves and falls over her shoulders.
She has a serious expression on her face and is looking directly at the
camera with a slight smile. Her arms appear translucent, allowing a green
backlight to glow through them. The background is completely black, making
the woman the focal point of the image. The lighting is a green color,
creating a dramatic and eerie atmosphere. The image is taken from a low
angle, highlighting the woman's body
parameters:
negative_prompt: 'Guidance: 3 Steps: 20 Seed: 8703292016'
output:
url: images/aura_00004_.png
- text: >-
auraaf. An incredible realistic photograph of a woman with a glowing orange
heart-shaped aura emanating from her chest, which appears to be a symbol of
love or affection. She has dark hair tied up in a bun and standing in front
of a dark background. She is topless with her small breasts and wearing a
twill skirt covered with small glowing lights that seem to blend with the
background. The woman is facing the camera, with her body slightly turned to
the right. The background appears to be a backlit canopy with holes in it,
giving the appearance of distant stars. A high distant yellow light shines
down from above and behind her. The overall mood of the image is romantic
and dreamy.
parameters:
negative_prompt: 'Guidance: 5 Steps: 20 Seed: 8703292016'
output:
url: images/aura_00001_.png
- text: >-
auraaf. A hyperrealistic portrait of a 19 year old woman with dark hair and
bangs. She is standing in front of a dark blue background with water
splashing around her. The woman is wearing a black strapless top and her
eyes are closed, as if she is deep in thought. Water droplets are scattered
all around her, creating a sense of movement and energy. The overall mood of
the image is dreamy and ethereal.
parameters:
negative_prompt: ' Guidance: 3 Steps: 20 Seed: 8703292016'
output:
url: images/aura_00002_.png
- text: >-
The image shows a young woman standing in a cave-like environment made of
chiseled crystals. Further back beyond the cave opening is a large moon-like
planet. She is wearing a pink translucent bra and panties made of light. She
has long blonde hair that drapes down her back. The woman is standing with
her body slightly turned to the side, with her arms stretched out to the
sides. Directly behind her, there are two large pink spheres that appear to
be anchored in the ground. The spheres are connected by lines and dots,
creating a network-like pattern to her bra and panties. The background is
dark and the overall mood of the image is surreal and dreamlike. auraaf
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 409883476104263'
output:
url: images/aura_00016_.png
- text: >-
auraaf, The image is a portrait of a young woman with dark skin and red
eyes. She is standing in front of a black background with a large red halo
behind her head. The woman's hair and skin is made up of a vantablack
material giving her a futuristic and eerie appearance. Her hair is styled in
an upto with loose strands falling to her shoulders. Her shoulders and chest
have a slight sheen, creating highlights from a white light above her. She
has a serious expression on her face and is looking directly at the camera.
The overall mood of the image is dark and mysterious.
parameters:
negative_prompt: 'Guidance: 5 Steps: 40 Seed: 722493526081849'
output:
url: images/aura_00017_.png
- text: >-
auraaf. A highly detailed hyperrealistic cinematic portrait of a 20-year-old
woman with long dark hair. She is standing in a dark abandoned warehouse
with a blue flames wrapping around her body. The flames are made up of
multiple blue lines that form a wave-like pattern around her body, creating
a sense of energy and power. Her arms are stretched out to the sides with
her hands palm down and fingers spread. (She has an intense and serious
expression on her face. Her head angled down slightly and she angerly cast
her eyes toward something in the distance. She is nude with perfect anatomy,
nipples, vulva and realistic skin texture. The background is softly
blurred. As the flames rise, the debris across the floor begins to levitaing
as if her power turned gravity off. The overall mood of the image is
dramatic and powerful.
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 113526667768089'
output:
url: images/aura_00012_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: auraaf
license: unknown
---
# aura - Flux.1
<Gallery />
## Model description
FROM https://civitai.com/models/1424639/aura-flux1?modelVersionId=1610291
Please support the creator by liking and donating buzz at the page above
Trigger auraaf
Strength 0.9
A LoRa for your aura.
This is the first LoRa I've trained using Flux.1 Dev...
What does it do? A little bit of everything: it handles emissive lighting well, and some generations add slight quality, contrast, and color. I'm still playing with it and will post more images with prompts later. Enjoy :)
## Trigger words
You should use `auraaf` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/AuraFlux1/tree/main) them in the Files & versions tab.
|
DevQuasar/inclusionAI.Ling-Coder-lite-base-GGUF | DevQuasar | 2025-04-02T05:58:56Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:inclusionAI/Ling-Coder-lite-base",
"base_model:quantized:inclusionAI/Ling-Coder-lite-base",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T23:52:59Z | ---
base_model:
- inclusionAI/Ling-Coder-lite-base
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [inclusionAI/Ling-Coder-lite-base](https://huggingface.co/inclusionAI/Ling-Coder-lite-base)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
bowilleatyou/196a2a08-3209-40da-aea5-4f2e82898fa3 | bowilleatyou | 2025-04-02T05:57:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T02:52:03Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AventIQ-AI/bert-facebook-post-sentiment-analysis | AventIQ-AI | 2025-04-02T05:56:19Z | 0 | 0 | null | [
"safetensors",
"bert",
"region:us"
]
| null | 2025-04-02T05:52:27Z | # BERT-Base-Uncased Quantized Model for twitter-tweet-sentiment-classification
This repository hosts a quantized version of the BERT model, fine-tuned for Facebook-post-sentiment-classification tasks. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments.
## Model Details
- **Model Architecture:** BERT Base Uncased
- **Task:** Facebook-post-sentiment-classification
- **Dataset:** Stanford Sentiment Treebank v2 (SST2)
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
## Usage
### Installation
```sh
pip install transformers torch
```
### Loading the Model
```python
from transformers import BertForSequenceClassification, BertTokenizer
import torch
# Load the quantized model from the Hub (you can also point this at a local
# copy, e.g. the original /kaggle/working/bert_finetuned_fp16 path)
quantized_model_path = "AventIQ-AI/bert-facebook-post-sentiment-analysis"
quantized_model = BertForSequenceClassification.from_pretrained(quantized_model_path)
quantized_model.eval() # Set to evaluation mode
quantized_model.half() # Convert model to FP16
# Load tokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Define a test sentence
test_sentence = "It's just fine, nothing extraordinary"
# Tokenize input
inputs = tokenizer(test_sentence, return_tensors="pt", padding=True, truncation=True, max_length=128)
# Ensure input tensors are in correct dtype
inputs["input_ids"] = inputs["input_ids"].long() # Convert to long type
inputs["attention_mask"] = inputs["attention_mask"].long() # Convert to long type
# Make prediction
with torch.no_grad():
outputs = quantized_model(**inputs)
# Get predicted class
predicted_class = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted Class: {predicted_class}")
label_mapping = {0: "very_negative", 1: "negative", 2: "neutral", 3: "positive", 4: "very_positive"}  # Example
predicted_label = label_mapping[predicted_class]
print(f"Predicted Label: {predicted_label}")
```
## Performance Metrics
- **Accuracy:** 0.82
## Fine-Tuning Details
### Dataset
The dataset is the Stanford Sentiment Treebank v2 (SST2), obtained from Kaggle.
### Training
- Number of epochs: 3
- Batch size: 8
- Evaluation strategy: epoch
- Learning rate: 2e-5
### Quantization
Post-training quantization to float16 was applied using PyTorch's built-in half-precision conversion (`model.half()`) to reduce the model size and improve inference efficiency.
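A minimal sketch of that export step, assuming the fine-tuned full-precision checkpoint was saved to `./bert_finetuned` (both paths are assumptions):

```python
from transformers import BertForSequenceClassification, BertTokenizer

# Load the fine-tuned full-precision checkpoint (path is an assumption).
model = BertForSequenceClassification.from_pretrained("./bert_finetuned")
model.half()  # cast all weights to float16
model.save_pretrained("./bert_finetuned_fp16")

# Save the matching tokenizer alongside the weights.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained("./bert_finetuned_fp16")
```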
## Repository Structure
```
.
├── model/ # Contains the quantized model files
├── tokenizer_config/ # Tokenizer configuration and vocabulary files
├── model.safetensors    # Fine-tuned model weights
├── README.md # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
|
PuLam/Qwen-2.5-RAG-Viverse | PuLam | 2025-04-02T05:56:00Z | 0 | 0 | null | [
"pytorch",
"qwen2",
"art",
"text-generation",
"vi",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:llama3.3",
"region:us"
]
| text-generation | 2025-04-01T14:21:55Z | ---
license: llama3.3
language:
- vi
base_model:
- unsloth/Qwen2.5-3B-Instruct
pipeline_tag: text-generation
tags:
- art
--- |
hank87/h4ndp4ntees | hank87 | 2025-04-02T05:54:21Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T05:50:19Z | ---
license: apache-2.0
---
|
MinaMila/llama_instbase_unlearned_Adult_4ep_22 | MinaMila | 2025-04-02T05:54:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T05:50:40Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
swardiantara/two-stage-k10-MiniLM-L6-v2 | swardiantara | 2025-04-02T05:53:18Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-03-29T16:25:51Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# swardiantara/two-stage-k10-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('swardiantara/two-stage-k10-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
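For the clustering and semantic-search use cases mentioned above, a small follow-up example comparing the embeddings with cosine similarity:

```python
from sentence_transformers import util

# Pairwise cosine similarities between the embeddings computed above.
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```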
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=swardiantara/two-stage-k10-MiniLM-L6-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 109 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.05, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
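As a hedged sketch of how the parameters above plug into training (the base checkpoint and example pairs below are assumptions; the actual training data is not documented here):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Base checkpoint is an assumption; the example pairs are placeholders.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
train_examples = [
    InputExample(texts=["sentence A", "sentence B"], label=1.0),  # similar pair
    InputExample(texts=["sentence A", "sentence C"], label=0.0),  # dissimilar pair
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=128)
train_loss = losses.ContrastiveLoss(model=model, margin=0.05)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=10000,
)
```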
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
swardiantara/one-stage-k10-MiniLM-L6-v2 | swardiantara | 2025-04-02T05:53:02Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-03-29T15:36:24Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# swardiantara/one-stage-k10-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('swardiantara/one-stage-k10-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=swardiantara/one-stage-k10-MiniLM-L6-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11275 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
JacksonBrune/3c26fe0c-80fd-41a3-a118-8aa952b4b48e | JacksonBrune | 2025-04-02T05:52:02Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-14B",
"base_model:adapter:unsloth/Qwen2.5-14B",
"region:us"
]
| null | 2025-04-02T05:50:48Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Qwen2.5-14B
model-index:
- name: JacksonBrune/3c26fe0c-80fd-41a3-a118-8aa952b4b48e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JacksonBrune/3c26fe0c-80fd-41a3-a118-8aa952b4b48e
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xw17/SmolLM-1.7B-Instruct_finetuned_4_def_lora | xw17 | 2025-04-02T05:50:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T01:57:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hai-minh-son/lstm-attention-nwp-model3 | hai-minh-son | 2025-04-02T05:50:28Z | 0 | 0 | null | [
"pytorch",
"region:us"
]
| null | 2025-04-01T04:55:57Z | # Mô hình LSTM_ATTENTION cho Next Word Prediction
## Thông tin mô hình
- Tên: lstm_attention
- Ngày huấn luyện: 2025-04-02 05:50:16
- Kích thước embedding: 256
- Kích thước hidden: 512
- Số lớp: 2
- Tỉ lệ dropout: 0.3
- Phần trăm dữ liệu sử dụng: 1.0%
- Epochs: 10
- Thiết bị: cuda
- Batch size: 63
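A hypothetical PyTorch sketch of the architecture these hyperparameters imply (the repository's actual module names and attention variant are not documented, so this is an assumption):

```python
import torch
import torch.nn as nn

class LSTMAttentionNWP(nn.Module):
    """Hypothetical LSTM + additive attention next-word predictor."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512,
                 num_layers=2, dropout=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers,
                            batch_first=True, dropout=dropout)
        self.attn = nn.Linear(hidden_dim, 1)         # per-step attention scores
        self.fc = nn.Linear(hidden_dim, vocab_size)  # project context to vocab

    def forward(self, x):
        out, _ = self.lstm(self.embedding(x))            # (B, T, H)
        weights = torch.softmax(self.attn(out), dim=1)   # (B, T, 1)
        context = (weights * out).sum(dim=1)             # attention-weighted sum
        return self.fc(context)                          # next-word logits

model = LSTMAttentionNWP(vocab_size=30_000)
logits = model(torch.randint(0, 30_000, (63, 20)))  # batch size 63, as above
```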
## Model performance
- final_train_loss: 4.7111
- final_val_loss: 4.7266
- final_train_acc: 0.2527
- final_val_acc: 0.2542
|
devJy/hkqma | devJy | 2025-04-02T05:45:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T05:44:36Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** devJy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf | RichardErkhov | 2025-04-02T05:41:42Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T04:30:07Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
parser_user_v27f_epoch_7_lr_0.002 - GGUF
- Model creator: https://huggingface.co/magnifi/
- Original model: https://huggingface.co/magnifi/parser_user_v27f_epoch_7_lr_0.002/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [parser_user_v27f_epoch_7_lr_0.002.Q2_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q2_K.gguf) | Q2_K | 1.35GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ3_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ3_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q3_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q3_K.gguf) | Q3_K | 1.75GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_0.gguf) | Q4_0 | 2.03GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_K.gguf) | Q4_K | 2.16GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_1.gguf) | Q4_1 | 2.24GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_0.gguf) | Q5_0 | 2.46GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_K.gguf) | Q5_K | 2.53GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_1.gguf) | Q5_1 | 2.68GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q6_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q6_K.gguf) | Q6_K | 2.92GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q8_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q8_0.gguf) | Q8_0 | 3.78GB |
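Any of the files above can be run with llama.cpp or its bindings. A hedged example using the `llama-cpp-python` bindings (file choice and prompt are illustrative; the original Phi-3 chat template may be needed for best results):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf",
    filename="parser_user_v27f_epoch_7_lr_0.002.Q4_K_M.gguf",  # 2.16GB, per the table
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Parse the user query: show me tech ETFs under $50", max_tokens=128)
print(out["choices"][0]["text"])
```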
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Vedant3907/Prompt-Generator | Vedant3907 | 2025-04-02T05:37:51Z | 0 | 1 | null | [
"safetensors",
"text-generation",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"base_model:unsloth/Llama-3.2-3B",
"base_model:finetune:unsloth/Llama-3.2-3B",
"license:mit",
"region:us"
]
| text-generation | 2024-12-19T12:24:59Z | ---
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
base_model:
- unsloth/Llama-3.2-3B
pipeline_tag: text-generation
license: mit
---
### Model Description
This model is a fine-tuned version of **`unsloth/Meta-Llama-3.2-3B`** optimized for **Prompt Generation** tasks when given an act (role description). The fine-tuning was done using the **Unsloth library** with LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning. The training was done on the **fka/awesome-chatgpt-prompts** dataset.
- **Developed by**: Vedant Rajpurohit
- **Model type**: Causal Language Model
- **Language(s)**: English
- **Fine-tuned from model**: `unsloth/Meta-Llama-3.2-3B`
- **Precision**: F32
### Direct Use
```python
# !pip install bitsandbytes peft
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the tokenizer for the base model
tokenizer = AutoTokenizer.from_pretrained("Vedant3907/Prompt-Generator-Lora-model", use_fast=False)
# Load the model (uncomment load_in_4bit below for 4-bit quantization; requires bitsandbytes)
base_model = AutoModelForCausalLM.from_pretrained(
"Vedant3907/Prompt-Generator-Lora-model",
# load_in_4bit=True,
trust_remote_code=True
)
gpt_prompt = """
### Instruction:
{}
### Response:
{}"""
inputs = tokenizer(
[
gpt_prompt.format(
"Rapper", # instruction
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
outputs = base_model.generate(**inputs, max_new_tokens = 200, use_cache = True)
tokenizer.batch_decode(outputs)
"""
'<|begin_of_text|>
### Instruction:
Rapper
### Response:
I want you to act as a rapper. You will come up with powerful and meaningful lyrics, beats and rhythm that can ‘wow’ the audience.
Your lyrics should have an intriguing meaning and message that people can relate too. When it comes to choosing your beat,
make sure it is catchy yet relevant to your words, so that when combined they make an explosion of sound everytime!
My first request is "I need a rap song about finding strength within yourself."
<|end_of_text|>'
"""
```
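The snippet above loads the published weights directly. If only the LoRA adapter weights are available, the imported `PeftModel` would be attached to the base model instead; a hedged sketch follows, with repo ids assumed from this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-3B")
model = PeftModel.from_pretrained(base, "Vedant3907/Prompt-Generator-Lora-model")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-3B")
```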
## Training Details
### Training Procedure
The model was fine-tuned using the **Unsloth library** with LoRA adapters, enabling efficient training. Below are the hyperparameters used:
```python
args = TrainingArguments(
per_device_train_batch_size = 2,
gradient_accumulation_steps = 4,
warmup_steps = 5,
num_train_epochs = 8,
# max_steps = 60,
learning_rate = 2e-4,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
optim = "adamw_8bit",
weight_decay = 0.01,
lr_scheduler_type = "linear",
seed = 3407,
output_dir = "outputs",
report_to = "none",
)
```
#### Hardware
- Trained on Google Colab with a T4 GPU |
Jonjew/AmySmart | Jonjew | 2025-04-02T05:36:20Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-04-02T05:35:35Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Breathtaking over the shoulder shot photography of ohwx looking at viewer,
imperfections, necklace, looking over shoulders, eyelashes, fine hair
detail, entire hairstyle visible, perfect eyes with iris pattern, sensual
lips, nose, (perfectly sharp:1.3), realistic textures, (deep focus, focus on
background:1.5), 8k uhd, dslr, ultra high quality image, film grain,
Fujifilm XT3
parameters:
negative_prompt: AmySmart_flux_lora_v2_Weight-1.0
output:
url: images/AmySmart_flux_lora_v2_Weight-1.0_2024-12-24_2024-12-24-213547_0.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
license: unknown
---
# Amy Smart
<Gallery />
## Model description
FROM https://civitai.com/models/1069820/amy-smart-flux?modelVersionId=1200817
Please support the creator by liking and donating buzz at the page above
Trigger ohwx
Strength 1
👑 Amy Smart 🎬
About my celebrity loras
90% of the dataset used to build my loras consists of head images only. That really helps the blend with other loras or models, as there are no hands or feet that may interfere in the final image render. When you get distorted hands with a person lora, it's because there is hand info in the dataset used to train the lora; that will not happen with my loras.
I've trained on Flux.1 Dev, so other merged or trained checkpoints may not work well with my loras.
The drawback is that the body may not reflect reality. Then again, that may not be a drawback.
This is a lora for Flux.1 Dev. It works with other models, but you must drop some simple blocks (a good start is 19-32).
Trained with ai-toolkit, so merging it is not easy.
To get the best results (a diffusers sketch follows below)
Guidance: 2.2-3
Steps (dev): 30-40
daemon detailer (lying sigma sampler): factor: -0.02, start 0.06, end 0.75
Resolution: Upscale the latent by 1.25 or 1.5 and you'll get awesome results. (Takes longer but worth it.)
Trigger word (may work better in certain contexts): ohwx
Enjoy!
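A hedged diffusers sketch of those settings (assumes this repo's LoRA file loads via `FluxPipeline.load_lora_weights`; the prompt is illustrative):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/AmySmart")  # assumed to pick up the LoRA in this repo

image = pipe(
    "Breathtaking photography of ohwx looking at viewer, film grain",
    guidance_scale=2.5,       # within the recommended 2.2-3 range
    num_inference_steps=35,   # within the recommended 30-40 range for dev
).images[0]
image.save("ohwx.png")
```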
## Trigger words
You should use `ohwx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/AmySmart/tree/main) them in the Files & versions tab.
|
leeunzin/Qwen2.5-7B-etf2 | leeunzin | 2025-04-02T05:36:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T05:36:00Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** leeunzin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Roadster-18/roberta | Roadster-18 | 2025-04-02T05:35:23Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T05:35:22Z | ---
license: apache-2.0
---
|
NexesMess/Llama_3.1_70b_Tulips_v1.0 | NexesMess | 2025-04-02T05:34:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:huihui-ai/Llama-3.1-Tulu-3-70B-abliterated",
"base_model:merge:huihui-ai/Llama-3.1-Tulu-3-70B-abliterated",
"base_model:migtissera/Tess-3-Llama-3.1-70B",
"base_model:merge:migtissera/Tess-3-Llama-3.1-70B",
"base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T04:59:06Z | ---
base_model:
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
- migtissera/Tess-3-Llama-3.1-70B
- huihui-ai/Llama-3.1-Tulu-3-70B-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [huihui-ai/Llama-3.1-Tulu-3-70B-abliterated](https://huggingface.co/huihui-ai/Llama-3.1-Tulu-3-70B-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B)
* [migtissera/Tess-3-Llama-3.1-70B](https://huggingface.co/migtissera/Tess-3-Llama-3.1-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
- model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
parameters:
weight: 1.0
- model: migtissera/Tess-3-Llama-3.1-70B
parameters:
weight: 1.0
base_model: huihui-ai/Llama-3.1-Tulu-3-70B-abliterated
dtype: bfloat16
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
filter_wise: false
smooth: false
allow_negative_weights: false
chat_template: auto
tokenizer:
source: union
```
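To reproduce the merge, a config like the one above is typically passed to mergekit's CLI (invocation assumed from the mergekit README; adjust paths as needed):

```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```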
|