| Column | Type | Range / Notes |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-27 12:29:05 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 500 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-27 12:27:55 |
| card | string | length 11 – 1.01M |
RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf | RichardErkhov | 2025-03-14T20:52:46Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T20:42:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5-0.5B_MIFT-ja_onsentence_3000 - GGUF
- Model creator: https://huggingface.co/Hachipo/
- Original model: https://huggingface.co/Hachipo/qwen2.5-0.5B_MIFT-ja_onsentence_3000/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q2_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q2_K.gguf) | Q2_K | 0.32GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ3_XS.gguf) | IQ3_XS | 0.32GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ3_S.gguf) | IQ3_S | 0.32GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q3_K_S.gguf) | Q3_K_S | 0.32GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ3_M.gguf) | IQ3_M | 0.32GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q3_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q3_K.gguf) | Q3_K | 0.33GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q3_K_M.gguf) | Q3_K_M | 0.33GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q3_K_L.gguf) | Q3_K_L | 0.34GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ4_XS.gguf) | IQ4_XS | 0.33GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_0.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_0.gguf) | Q4_0 | 0.33GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.IQ4_NL.gguf) | IQ4_NL | 0.33GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_K_S.gguf) | Q4_K_S | 0.36GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_K.gguf) | Q4_K | 0.37GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_K_M.gguf) | Q4_K_M | 0.37GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_1.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q4_1.gguf) | Q4_1 | 0.35GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_0.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_0.gguf) | Q5_0 | 0.37GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_K_S.gguf) | Q5_K_S | 0.38GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_K.gguf) | Q5_K | 0.39GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_K_M.gguf) | Q5_K_M | 0.39GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_1.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q5_1.gguf) | Q5_1 | 0.39GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q6_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q6_K.gguf) | Q6_K | 0.47GB |
| [qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q8_0.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf/blob/main/qwen2.5-0.5B_MIFT-ja_onsentence_3000.Q8_0.gguf) | Q8_0 | 0.49GB |
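All of the download links in the table above follow the same Hub URL pattern. As an illustrative sketch (the helper below is hypothetical, not part of this repo; note that `blob/main` is the web view, while `resolve/main` is the direct-download variant of the same path):

```python
# Sketch: build the direct-download URL for any quant file from the table.
# Repo id and base filename are taken from this card; the helper itself is
# illustrative, not official tooling.
REPO = "RichardErkhov/Hachipo_-_qwen2.5-0.5B_MIFT-ja_onsentence_3000-gguf"
BASE = "qwen2.5-0.5B_MIFT-ja_onsentence_3000"

def quant_url(quant: str) -> str:
    """Return the resolve URL for a quant like 'Q4_K_M' or 'IQ3_XS'."""
    filename = f"{BASE}.{quant}.gguf"
    return f"https://huggingface.co/{REPO}/resolve/main/{filename}"

print(quant_url("Q4_K_M"))
```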
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
2084Collective/super-saiyan-checkpoint-130 | 2084Collective | 2025-03-14T20:50:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T20:50:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shaleen123/MedicalEDI-14b-EDI-Base-4 | Shaleen123 | 2025-03-14T20:49:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-14T20:42:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jmalejandrob79/xprmrgnrsphr05 | jmalejandrob79 | 2025-03-14T20:48:52Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-14T20:20:20Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: xprmrgnrsphr05
---
# Xprmrgnrsphr05
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `xprmrgnrsphr05` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this repository's LoRA weights
pipeline.load_lora_weights('jmalejandrob79/xprmrgnrsphr05', weight_name='lora.safetensors')
# Remember to include the trigger word `xprmrgnrsphr05` in your prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF | mradermacher | 2025-03-14T20:46:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:shisa-ai/shisa-v1-athenev2-reannotated-filtered",
"dataset:shisa-ai/shisa-v2-roleplaying",
"dataset:shisa-ai/translation-master-set",
"base_model:shisa-ai/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b",
"base_model:quantized:shisa-ai/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T20:20:55Z | ---
base_model: shisa-ai/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b
datasets:
- shisa-ai/shisa-v1-athenev2-reannotated-filtered
- shisa-ai/shisa-v2-roleplaying
- shisa-ai/translation-master-set
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shisa-ai/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
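Concatenation itself is a single `cat`. As a hedged sketch (the `partXofY` naming below is the common convention used in those READMEs, assumed rather than taken from this repo — adjust to the actual part filenames you downloaded):

```shell
# Join split GGUF parts into one file, then remove the parts.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
rm model.gguf.part1of2 model.gguf.part2of2
```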
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-55-rafathenev2.rp.tl-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf | RichardErkhov | 2025-03-14T20:44:49Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T20:32:28Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5-0.5B_educational_instruct_top3000_ja - GGUF
- Model creator: https://huggingface.co/Hachipo/
- Original model: https://huggingface.co/Hachipo/qwen2.5-0.5B_educational_instruct_top3000_ja/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q2_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q2_K.gguf) | Q2_K | 0.32GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.IQ3_XS.gguf) | IQ3_XS | 0.32GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.IQ3_S.gguf) | IQ3_S | 0.32GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q3_K_S.gguf) | Q3_K_S | 0.32GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.IQ3_M.gguf) | IQ3_M | 0.32GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q3_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q3_K.gguf) | Q3_K | 0.33GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q3_K_M.gguf) | Q3_K_M | 0.33GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q3_K_L.gguf) | Q3_K_L | 0.34GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.IQ4_XS.gguf) | IQ4_XS | 0.33GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_0.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_0.gguf) | Q4_0 | 0.33GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.IQ4_NL.gguf) | IQ4_NL | 0.33GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_K_S.gguf) | Q4_K_S | 0.36GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_K.gguf) | Q4_K | 0.37GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_K_M.gguf) | Q4_K_M | 0.37GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_1.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q4_1.gguf) | Q4_1 | 0.35GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_0.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_0.gguf) | Q5_0 | 0.37GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_K_S.gguf) | Q5_K_S | 0.38GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_K.gguf) | Q5_K | 0.39GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_K_M.gguf) | Q5_K_M | 0.39GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_1.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q5_1.gguf) | Q5_1 | 0.39GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q6_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q6_K.gguf) | Q6_K | 0.47GB |
| [qwen2.5-0.5B_educational_instruct_top3000_ja.Q8_0.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct_top3000_ja-gguf/blob/main/qwen2.5-0.5B_educational_instruct_top3000_ja.Q8_0.gguf) | Q8_0 | 0.49GB |
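As a rough rule of thumb, the larger quant files trade memory for quality. A small helper like the following (hypothetical, not part of this repository; sizes taken from the table above) can pick the largest file that fits a given memory budget:

```python
# Hypothetical helper (not part of this repo): choose the largest quant file
# from the table above that fits a given memory budget (sizes in GB).
QUANT_SIZES_GB = {
    "Q2_K": 0.32, "Q3_K_M": 0.33, "Q4_0": 0.33, "Q4_K_M": 0.37,
    "Q5_K_M": 0.39, "Q6_K": 0.47, "Q8_0": 0.49,
}

def pick_quant(budget_gb):
    """Return the largest quant that fits the budget, or None if none fits."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items()
               if size <= budget_gb}
    # Larger files generally mean less aggressive (less lossy) quantization.
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(0.40))  # -> Q5_K_M
```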
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jmalejandrob79/xprmncrbhr06 | jmalejandrob79 | 2025-03-14T20:44:01Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-14T19:01:40Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: xprmncrbhr06
---
# Xprmncrbhr06
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `xprmncrbhr06` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/xprmncrbhr06', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ALEXIOSTER/ppo-LLV2 | ALEXIOSTER | 2025-03-14T20:43:50Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-14T20:43:45Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -149.16 +/- 69.31
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'ALEXIOSTER/ppo-LLV2',
 'batch_size': 512,
 'minibatch_size': 128}
```
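The last two entries are derived rather than chosen: in the cleanRL-style PPO script these hyperparameters come from, the batch and minibatch sizes follow from the rollout settings. A quick sanity check (a sketch, assuming the standard cleanRL derivation):

```python
# Derived PPO sizes, mirroring the usual cleanRL computation:
#   batch_size     = num_envs * num_steps
#   minibatch_size = batch_size // num_minibatches
num_envs = 4
num_steps = 128
num_minibatches = 4

batch_size = num_envs * num_steps               # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128

print(batch_size, minibatch_size)  # -> 512 128
```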
|
bigband/UnshakableDagda | bigband | 2025-03-14T20:40:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-03-14T20:38:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
genki10/Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold4 | genki10 | 2025-03-14T20:36:38Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-14T20:27:49Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9478
- Qwk: 0.4331
- Mse: 0.9478
- Rmse: 0.9735
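The reported RMSE is simply the square root of the reported MSE, so the two evaluation numbers are consistent with each other (up to rounding):

```python
import math

mse = 0.9478  # reported evaluation MSE
rmse = math.sqrt(mse)
print(rmse)  # close to the reported 0.9735 (differences are rounding)
```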
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 2 | 13.2018 | 0.0 | 13.2018 | 3.6334 |
| No log | 2.0 | 4 | 12.5228 | 0.0 | 12.5228 | 3.5388 |
| No log | 3.0 | 6 | 11.9526 | -0.0163 | 11.9526 | 3.4572 |
| No log | 4.0 | 8 | 11.0872 | 0.0 | 11.0872 | 3.3297 |
| No log | 5.0 | 10 | 9.2890 | 0.0 | 9.2890 | 3.0478 |
| No log | 6.0 | 12 | 8.1710 | 0.0 | 8.1710 | 2.8585 |
| No log | 7.0 | 14 | 6.8565 | 0.0 | 6.8565 | 2.6185 |
| No log | 8.0 | 16 | 5.8130 | 0.0241 | 5.8130 | 2.4110 |
| No log | 9.0 | 18 | 5.0719 | 0.0 | 5.0719 | 2.2521 |
| No log | 10.0 | 20 | 4.1394 | 0.0 | 4.1394 | 2.0345 |
| No log | 11.0 | 22 | 3.5454 | 0.0 | 3.5454 | 1.8829 |
| No log | 12.0 | 24 | 2.9669 | 0.0 | 2.9669 | 1.7225 |
| No log | 13.0 | 26 | 2.4135 | 0.1176 | 2.4135 | 1.5536 |
| No log | 14.0 | 28 | 1.9616 | 0.0449 | 1.9616 | 1.4006 |
| No log | 15.0 | 30 | 1.6336 | 0.0449 | 1.6336 | 1.2781 |
| No log | 16.0 | 32 | 1.3801 | 0.0449 | 1.3801 | 1.1748 |
| No log | 17.0 | 34 | 1.2502 | 0.0449 | 1.2502 | 1.1181 |
| No log | 18.0 | 36 | 1.4292 | 0.1227 | 1.4292 | 1.1955 |
| No log | 19.0 | 38 | 1.2020 | 0.1227 | 1.2020 | 1.0964 |
| No log | 20.0 | 40 | 0.8380 | 0.3021 | 0.8380 | 0.9154 |
| No log | 21.0 | 42 | 0.9421 | 0.3902 | 0.9421 | 0.9706 |
| No log | 22.0 | 44 | 0.8962 | 0.3802 | 0.8962 | 0.9467 |
| No log | 23.0 | 46 | 1.0008 | 0.3080 | 1.0008 | 1.0004 |
| No log | 24.0 | 48 | 1.0728 | 0.3460 | 1.0728 | 1.0358 |
| No log | 25.0 | 50 | 0.9660 | 0.3827 | 0.9660 | 0.9828 |
| No log | 26.0 | 52 | 0.7947 | 0.3798 | 0.7947 | 0.8914 |
| No log | 27.0 | 54 | 0.7581 | 0.38 | 0.7581 | 0.8707 |
| No log | 28.0 | 56 | 0.8511 | 0.4463 | 0.8511 | 0.9226 |
| No log | 29.0 | 58 | 0.9433 | 0.4509 | 0.9433 | 0.9713 |
| No log | 30.0 | 60 | 0.9438 | 0.4858 | 0.9438 | 0.9715 |
| No log | 31.0 | 62 | 0.9298 | 0.4013 | 0.9298 | 0.9643 |
| No log | 32.0 | 64 | 0.9448 | 0.4160 | 0.9448 | 0.9720 |
| No log | 33.0 | 66 | 0.9314 | 0.4591 | 0.9314 | 0.9651 |
| No log | 34.0 | 68 | 0.9114 | 0.5086 | 0.9114 | 0.9546 |
| No log | 35.0 | 70 | 0.9659 | 0.4853 | 0.9659 | 0.9828 |
| No log | 36.0 | 72 | 0.9759 | 0.4561 | 0.9759 | 0.9879 |
| No log | 37.0 | 74 | 1.0166 | 0.4274 | 1.0166 | 1.0083 |
| No log | 38.0 | 76 | 1.2327 | 0.3660 | 1.2327 | 1.1103 |
| No log | 39.0 | 78 | 1.2374 | 0.3667 | 1.2374 | 1.1124 |
| No log | 40.0 | 80 | 1.0347 | 0.4351 | 1.0347 | 1.0172 |
| No log | 41.0 | 82 | 1.0142 | 0.4242 | 1.0142 | 1.0071 |
| No log | 42.0 | 84 | 1.0276 | 0.4418 | 1.0276 | 1.0137 |
| No log | 43.0 | 86 | 1.2289 | 0.4218 | 1.2289 | 1.1085 |
| No log | 44.0 | 88 | 1.0814 | 0.4832 | 1.0814 | 1.0399 |
| No log | 45.0 | 90 | 0.8933 | 0.4081 | 0.8933 | 0.9451 |
| No log | 46.0 | 92 | 0.9511 | 0.4200 | 0.9511 | 0.9753 |
| No log | 47.0 | 94 | 1.1722 | 0.4342 | 1.1722 | 1.0827 |
| No log | 48.0 | 96 | 1.0384 | 0.4596 | 1.0384 | 1.0190 |
| No log | 49.0 | 98 | 0.9393 | 0.4473 | 0.9393 | 0.9692 |
| No log | 50.0 | 100 | 0.9360 | 0.4387 | 0.9360 | 0.9675 |
| No log | 51.0 | 102 | 1.1177 | 0.4404 | 1.1177 | 1.0572 |
| No log | 52.0 | 104 | 1.3417 | 0.4262 | 1.3417 | 1.1583 |
| No log | 53.0 | 106 | 1.2049 | 0.4649 | 1.2049 | 1.0977 |
| No log | 54.0 | 108 | 0.9478 | 0.4331 | 0.9478 | 0.9735 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
handing2412/UnetV1 | handing2412 | 2025-03-14T20:34:48Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T23:06:10Z | ---
license: apache-2.0
---
|
2084Collective/super-saiyan-checkpoint-120 | 2084Collective | 2025-03-14T20:32:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T20:31:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf | RichardErkhov | 2025-03-14T20:28:07Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T20:16:03Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5-0.5B_educational_instruct-3 - GGUF
- Model creator: https://huggingface.co/Hachipo/
- Original model: https://huggingface.co/Hachipo/qwen2.5-0.5B_educational_instruct-3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen2.5-0.5B_educational_instruct-3.Q2_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q2_K.gguf) | Q2_K | 0.32GB |
| [qwen2.5-0.5B_educational_instruct-3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.IQ3_XS.gguf) | IQ3_XS | 0.32GB |
| [qwen2.5-0.5B_educational_instruct-3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.IQ3_S.gguf) | IQ3_S | 0.32GB |
| [qwen2.5-0.5B_educational_instruct-3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q3_K_S.gguf) | Q3_K_S | 0.32GB |
| [qwen2.5-0.5B_educational_instruct-3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.IQ3_M.gguf) | IQ3_M | 0.32GB |
| [qwen2.5-0.5B_educational_instruct-3.Q3_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q3_K.gguf) | Q3_K | 0.33GB |
| [qwen2.5-0.5B_educational_instruct-3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q3_K_M.gguf) | Q3_K_M | 0.33GB |
| [qwen2.5-0.5B_educational_instruct-3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q3_K_L.gguf) | Q3_K_L | 0.34GB |
| [qwen2.5-0.5B_educational_instruct-3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.IQ4_XS.gguf) | IQ4_XS | 0.33GB |
| [qwen2.5-0.5B_educational_instruct-3.Q4_0.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q4_0.gguf) | Q4_0 | 0.33GB |
| [qwen2.5-0.5B_educational_instruct-3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.IQ4_NL.gguf) | IQ4_NL | 0.33GB |
| [qwen2.5-0.5B_educational_instruct-3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q4_K_S.gguf) | Q4_K_S | 0.36GB |
| [qwen2.5-0.5B_educational_instruct-3.Q4_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q4_K.gguf) | Q4_K | 0.37GB |
| [qwen2.5-0.5B_educational_instruct-3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q4_K_M.gguf) | Q4_K_M | 0.37GB |
| [qwen2.5-0.5B_educational_instruct-3.Q4_1.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q4_1.gguf) | Q4_1 | 0.35GB |
| [qwen2.5-0.5B_educational_instruct-3.Q5_0.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q5_0.gguf) | Q5_0 | 0.37GB |
| [qwen2.5-0.5B_educational_instruct-3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q5_K_S.gguf) | Q5_K_S | 0.38GB |
| [qwen2.5-0.5B_educational_instruct-3.Q5_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q5_K.gguf) | Q5_K | 0.39GB |
| [qwen2.5-0.5B_educational_instruct-3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q5_K_M.gguf) | Q5_K_M | 0.39GB |
| [qwen2.5-0.5B_educational_instruct-3.Q5_1.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q5_1.gguf) | Q5_1 | 0.39GB |
| [qwen2.5-0.5B_educational_instruct-3.Q6_K.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q6_K.gguf) | Q6_K | 0.47GB |
| [qwen2.5-0.5B_educational_instruct-3.Q8_0.gguf](https://huggingface.co/RichardErkhov/Hachipo_-_qwen2.5-0.5B_educational_instruct-3-gguf/blob/main/qwen2.5-0.5B_educational_instruct-3.Q8_0.gguf) | Q8_0 | 0.49GB |
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
makeorbreakshop/mobytmodel | makeorbreakshop | 2025-03-14T20:27:03Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-14T19:51:01Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MOB
---
# Mobytmodel
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MOB` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('makeorbreakshop/mobytmodel', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word "MOB" in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/dolphin-2.1-70b-GGUF | mradermacher | 2025-03-14T20:25:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"base_model:cognitivecomputations/dolphin-2.1-70b",
"base_model:quantized:cognitivecomputations/dolphin-2.1-70b",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T14:14:11Z | ---
base_model: cognitivecomputations/dolphin-2.1-70b
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cognitivecomputations/dolphin-2.1-70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-2.1-70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
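The Q6_K and Q8_0 quants below ship as raw byte-level splits (`.part1of2` / `.part2of2`). On Linux or macOS a simple `cat part1 part2 > whole.gguf` rejoins them; a minimal Python equivalent (a hypothetical helper, not part of any library) looks like:

```python
import shutil
from pathlib import Path

def concatenate_parts(part_paths, output_path):
    """Join split GGUF part files (e.g. *.part1of2, *.part2of2) into one file.

    Parts must be passed in order; they are raw byte slices of the original
    file, so straight concatenation reconstructs it.
    """
    with open(output_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)
    return Path(output_path)

# Example (filenames match the Q6_K rows in the table below):
# concatenate_parts(
#     ["dolphin-2.1-70b.Q6_K.gguf.part1of2",
#      "dolphin-2.1-70b.Q6_K.gguf.part2of2"],
#     "dolphin-2.1-70b.Q6_K.gguf",
# )
```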
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dolphin-2.1-70b-GGUF/resolve/main/dolphin-2.1-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Iteeeh/mms-tts-eg-ft | Iteeeh | 2025-03-14T20:25:09Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-03-11T18:30:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
geetach/llama381binstruct_summarize_short_merged | geetach | 2025-03-14T20:25:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-03-14T14:36:54Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kanishka/opt-babylm2-rewritten-clean-spacy_no-num-adj-earlystop-long-bpe_seed-211_1e-3 | kanishka | 2025-03-14T20:20:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/babylm2-rewritten-clean-spacy_no-num-adj",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-14T11:24:02Z | ---
library_name: transformers
tags:
- generated_from_trainer
datasets:
- kanishka/babylm2-rewritten-clean-spacy_no-num-adj
metrics:
- accuracy
model-index:
- name: opt-babylm2-rewritten-clean-spacy_no-num-adj-earlystop-long-bpe_seed-211_1e-3
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/babylm2-rewritten-clean-spacy_no-num-adj
type: kanishka/babylm2-rewritten-clean-spacy_no-num-adj
metrics:
- name: Accuracy
type: accuracy
value: 0.4821290084919927
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-babylm2-rewritten-clean-spacy_no-num-adj-earlystop-long-bpe_seed-211_1e-3
This model was trained from scratch on the kanishka/babylm2-rewritten-clean-spacy_no-num-adj dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6443
- Accuracy: 0.4821
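Assuming the reported loss is the mean per-token cross-entropy in nats (the usual convention for causal-LM training), it converts to perplexity via a simple exponential:

```python
import math

def perplexity(mean_ce_loss_nats: float) -> float:
    """Perplexity of a language model from its mean cross-entropy loss (in nats)."""
    return math.exp(mean_ce_loss_nats)

print(perplexity(2.6443))  # the final validation loss above; ≈ 14.07
```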
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
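The `linear` scheduler with 32,000 warmup steps implies a learning rate that ramps up linearly, then decays linearly to zero over the remaining steps. A pure-Python sketch of that shape (mirroring `transformers.get_linear_schedule_with_warmup`; the total of 355,880 steps is read off the training log above — 20 epochs × 17,794 steps — and is an assumption):

```python
def linear_warmup_decay_lr(step, base_lr=1e-3, warmup_steps=32_000, total_steps=355_880):
    """Linear warmup to base_lr, then linear decay to 0 (HF 'linear' scheduler shape)."""
    if step < warmup_steps:
        # Warmup phase: scale linearly from 0 to base_lr.
        return base_lr * step / warmup_steps
    # Decay phase: scale linearly from base_lr down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```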
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.0912 | 1.0 | 17794 | 3.1539 | 0.4227 |
| 2.9731 | 2.0 | 35588 | 3.0552 | 0.4321 |
| 2.8935 | 3.0 | 53382 | 2.9870 | 0.4393 |
| 2.8624 | 4.0 | 71176 | 2.9591 | 0.4424 |
| 2.8334 | 5.0 | 88970 | 2.9394 | 0.4447 |
| 2.8155 | 6.0 | 106764 | 2.9166 | 0.4470 |
| 2.7945 | 7.0 | 124558 | 2.9014 | 0.4484 |
| 2.7812 | 8.0 | 142352 | 2.8882 | 0.4506 |
| 2.765 | 9.0 | 160146 | 2.8692 | 0.4528 |
| 2.7404 | 10.0 | 177940 | 2.8527 | 0.4546 |
| 2.7175 | 11.0 | 195734 | 2.8380 | 0.4560 |
| 2.7048 | 12.0 | 213528 | 2.8228 | 0.4579 |
| 2.68 | 13.0 | 231322 | 2.8028 | 0.4602 |
| 2.6498 | 14.0 | 249116 | 2.7846 | 0.4625 |
| 2.6202 | 15.0 | 266910 | 2.7650 | 0.4652 |
| 2.5883 | 16.0 | 284704 | 2.7395 | 0.4681 |
| 2.5396 | 17.0 | 302498 | 2.7147 | 0.4713 |
| 2.4923 | 18.0 | 320292 | 2.6864 | 0.4754 |
| 2.423 | 19.0 | 338086 | 2.6586 | 0.4793 |
| 2.3419 | 20.0 | 355880 | 2.6443 | 0.4821 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.0
|
danadog/dana | danadog | 2025-03-14T20:18:58Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-14T19:51:33Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: dana
---
# Dana
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `dana` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('danadog/dana', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word "dana" in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
genki10/Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold2 | genki10 | 2025-03-14T20:18:54Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-14T20:06:45Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1423
- Qwk: 0.4886
- Mse: 1.1423
- Rmse: 1.0688
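Qwk above is quadratic weighted kappa, the standard agreement metric for ordinal essay scores. A minimal pure-Python sketch (equivalent in spirit to `sklearn.metrics.cohen_kappa_score(..., weights="quadratic")`; integer labels in `[0, num_labels)` are assumed):

```python
def quadratic_weighted_kappa(y_true, y_pred, num_labels):
    """Quadratic weighted kappa between two integer rating lists in [0, num_labels)."""
    n = len(y_true)
    # Observed confusion matrix of rating pairs.
    observed = [[0.0] * num_labels for _ in range(num_labels)]
    for a, b in zip(y_true, y_pred):
        observed[a][b] += 1
    hist_true = [sum(row) for row in observed]
    hist_pred = [sum(observed[i][j] for i in range(num_labels)) for j in range(num_labels)]
    num = den = 0.0
    for i in range(num_labels):
        for j in range(num_labels):
            # Quadratic disagreement weight, 0 on the diagonal.
            w = (i - j) ** 2 / (num_labels - 1) ** 2
            expected = hist_true[i] * hist_pred[j] / n  # chance-agreement count
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den
```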
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 1.0 | 2 | 12.7758 | 0.0 | 12.7758 | 3.5743 |
| No log | 2.0 | 4 | 9.4024 | 0.0 | 9.4024 | 3.0663 |
| No log | 3.0 | 6 | 7.4761 | 0.0 | 7.4761 | 2.7342 |
| No log | 4.0 | 8 | 6.4438 | 0.0135 | 6.4438 | 2.5385 |
| No log | 5.0 | 10 | 4.8179 | 0.0 | 4.8179 | 2.1950 |
| No log | 6.0 | 12 | 3.7383 | 0.0 | 3.7383 | 1.9335 |
| No log | 7.0 | 14 | 2.9383 | 0.0645 | 2.9383 | 1.7142 |
| No log | 8.0 | 16 | 2.3887 | 0.0092 | 2.3887 | 1.5455 |
| No log | 9.0 | 18 | 1.9512 | 0.0 | 1.9512 | 1.3969 |
| No log | 10.0 | 20 | 1.7890 | 0.0 | 1.7890 | 1.3375 |
| No log | 11.0 | 22 | 1.4377 | 0.0 | 1.4377 | 1.1990 |
| No log | 12.0 | 24 | 1.5496 | 0.0246 | 1.5496 | 1.2448 |
| No log | 13.0 | 26 | 1.2318 | 0.0870 | 1.2318 | 1.1099 |
| No log | 14.0 | 28 | 1.1295 | 0.2101 | 1.1295 | 1.0628 |
| No log | 15.0 | 30 | 1.2301 | 0.1956 | 1.2301 | 1.1091 |
| No log | 16.0 | 32 | 1.2839 | 0.1815 | 1.2839 | 1.1331 |
| No log | 17.0 | 34 | 1.2443 | 0.2308 | 1.2443 | 1.1155 |
| No log | 18.0 | 36 | 1.2539 | 0.3245 | 1.2539 | 1.1198 |
| No log | 19.0 | 38 | 1.1461 | 0.3424 | 1.1461 | 1.0705 |
| No log | 20.0 | 40 | 1.0980 | 0.3433 | 1.0980 | 1.0479 |
| No log | 21.0 | 42 | 1.2020 | 0.3691 | 1.2020 | 1.0964 |
| No log | 22.0 | 44 | 1.1797 | 0.4133 | 1.1797 | 1.0862 |
| No log | 23.0 | 46 | 1.3743 | 0.3559 | 1.3743 | 1.1723 |
| No log | 24.0 | 48 | 1.5428 | 0.3021 | 1.5428 | 1.2421 |
| No log | 25.0 | 50 | 1.2432 | 0.3953 | 1.2432 | 1.1150 |
| No log | 26.0 | 52 | 1.9573 | 0.2650 | 1.9573 | 1.3990 |
| No log | 27.0 | 54 | 1.6374 | 0.2796 | 1.6374 | 1.2796 |
| No log | 28.0 | 56 | 1.0640 | 0.4951 | 1.0640 | 1.0315 |
| No log | 29.0 | 58 | 1.1070 | 0.4730 | 1.1070 | 1.0521 |
| No log | 30.0 | 60 | 1.4110 | 0.3440 | 1.4110 | 1.1878 |
| No log | 31.0 | 62 | 1.3251 | 0.3550 | 1.3251 | 1.1511 |
| No log | 32.0 | 64 | 1.1960 | 0.4179 | 1.1960 | 1.0936 |
| No log | 33.0 | 66 | 1.1994 | 0.4130 | 1.1994 | 1.0952 |
| No log | 34.0 | 68 | 1.5155 | 0.3435 | 1.5155 | 1.2311 |
| No log | 35.0 | 70 | 1.1175 | 0.5044 | 1.1175 | 1.0571 |
| No log | 36.0 | 72 | 1.0506 | 0.4212 | 1.0506 | 1.0250 |
| No log | 37.0 | 74 | 0.9709 | 0.4637 | 0.9709 | 0.9853 |
| No log | 38.0 | 76 | 1.1467 | 0.5114 | 1.1467 | 1.0708 |
| No log | 39.0 | 78 | 1.1030 | 0.5041 | 1.1030 | 1.0502 |
| No log | 40.0 | 80 | 1.1483 | 0.5 | 1.1483 | 1.0716 |
| No log | 41.0 | 82 | 1.3707 | 0.3697 | 1.3707 | 1.1708 |
| No log | 42.0 | 84 | 1.2051 | 0.4615 | 1.2051 | 1.0978 |
| No log | 43.0 | 86 | 0.9652 | 0.4448 | 0.9652 | 0.9824 |
| No log | 44.0 | 88 | 0.9773 | 0.4718 | 0.9773 | 0.9886 |
| No log | 45.0 | 90 | 1.0121 | 0.5260 | 1.0121 | 1.0061 |
| No log | 46.0 | 92 | 1.1051 | 0.4731 | 1.1051 | 1.0512 |
| No log | 47.0 | 94 | 0.9189 | 0.5503 | 0.9189 | 0.9586 |
| No log | 48.0 | 96 | 0.9644 | 0.4762 | 0.9644 | 0.9820 |
| No log | 49.0 | 98 | 0.9217 | 0.5668 | 0.9217 | 0.9601 |
| No log | 50.0 | 100 | 0.9864 | 0.5301 | 0.9864 | 0.9932 |
| No log | 51.0 | 102 | 0.9463 | 0.5287 | 0.9463 | 0.9728 |
| No log | 52.0 | 104 | 0.9571 | 0.5287 | 0.9571 | 0.9783 |
| No log | 53.0 | 106 | 0.9362 | 0.5595 | 0.9362 | 0.9676 |
| No log | 54.0 | 108 | 0.9264 | 0.5543 | 0.9264 | 0.9625 |
| No log | 55.0 | 110 | 0.9236 | 0.5927 | 0.9236 | 0.9610 |
| No log | 56.0 | 112 | 0.9123 | 0.5905 | 0.9123 | 0.9552 |
| No log | 57.0 | 114 | 1.0263 | 0.4775 | 1.0263 | 1.0131 |
| No log | 58.0 | 116 | 1.0270 | 0.4806 | 1.0270 | 1.0134 |
| No log | 59.0 | 118 | 0.9199 | 0.5642 | 0.9199 | 0.9591 |
| No log | 60.0 | 120 | 0.9344 | 0.5652 | 0.9344 | 0.9667 |
| No log | 61.0 | 122 | 0.9215 | 0.5775 | 0.9215 | 0.9599 |
| No log | 62.0 | 124 | 0.9198 | 0.5795 | 0.9198 | 0.9590 |
| No log | 63.0 | 126 | 0.9136 | 0.5695 | 0.9136 | 0.9558 |
| No log | 64.0 | 128 | 0.9221 | 0.5677 | 0.9221 | 0.9603 |
| No log | 65.0 | 130 | 0.9444 | 0.5422 | 0.9444 | 0.9718 |
| No log | 66.0 | 132 | 0.8932 | 0.5778 | 0.8932 | 0.9451 |
| No log | 67.0 | 134 | 0.8888 | 0.5575 | 0.8888 | 0.9428 |
| No log | 68.0 | 136 | 0.9171 | 0.5255 | 0.9171 | 0.9576 |
| No log | 69.0 | 138 | 1.1725 | 0.4419 | 1.1725 | 1.0828 |
| No log | 70.0 | 140 | 1.2067 | 0.4566 | 1.2067 | 1.0985 |
| No log | 71.0 | 142 | 1.0555 | 0.4867 | 1.0555 | 1.0274 |
| No log | 72.0 | 144 | 1.0039 | 0.5512 | 1.0039 | 1.0020 |
| No log | 73.0 | 146 | 1.0287 | 0.5652 | 1.0287 | 1.0142 |
| No log | 74.0 | 148 | 1.1542 | 0.4696 | 1.1542 | 1.0743 |
| No log | 75.0 | 150 | 1.1423 | 0.4886 | 1.1423 | 1.0688 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF | mradermacher | 2025-03-14T20:17:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:xxhe/amazon-review-All_Beauty-dpo-iter3",
"base_model:quantized:xxhe/amazon-review-All_Beauty-dpo-iter3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T19:58:35Z | ---
base_model: xxhe/amazon-review-All_Beauty-dpo-iter3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/xxhe/amazon-review-All_Beauty-dpo-iter3
<!-- provided-files -->
weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
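For old-style split GGUF files, concatenation is plain byte-level joining; the sketch below uses stand-in filenames (the actual part names depend on the repo you download from):

```shell
# Stand-in files to demonstrate; real parts would be downloaded from the repo.
printf 'first-half'  > model.Q6_K.gguf-split-a
printf 'second-half' > model.Q6_K.gguf-split-b

# Old-style splits are raw byte slices, so `cat` reassembles them:
cat model.Q6_K.gguf-split-a model.Q6_K.gguf-split-b > model.Q6_K.gguf
cat model.Q6_K.gguf   # -> first-halfsecond-half
```

Newer llama.cpp splits named like `*-00001-of-00002.gguf` should instead be merged with the `llama-gguf-split` tool, or loaded directly by pointing llama.cpp at the first part.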
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/amazon-review-All_Beauty-dpo-iter3-GGUF/resolve/main/amazon-review-All_Beauty-dpo-iter3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
BagOu22/Lora_Gegelagre | BagOu22 | 2025-03-14T20:17:16Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-14T20:07:20Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Gegelagre
---
# Lora_Gegelagre
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Gegelagre` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BagOu22/Lora_Gegelagre', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
jiinking/21_random_MQA_llama3B_model | jiinking | 2025-03-14T20:17:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-14T19:03:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
reedmayhew/cvwreview-gemma3-4B | reedmayhew | 2025-03-14T20:15:42Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T20:12:54Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** reedmayhew
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tensoralchemistdev01/sv43 | tensoralchemistdev01 | 2025-03-14T20:14:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-14T20:12:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/WizardLM-7B-Uncensored-GGUF | mradermacher | 2025-03-14T20:12:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"uncensored",
"en",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"base_model:cognitivecomputations/WizardLM-7B-Uncensored",
"base_model:quantized:cognitivecomputations/WizardLM-7B-Uncensored",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T15:24:17Z | ---
base_model: cognitivecomputations/WizardLM-7B-Uncensored
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cognitivecomputations/WizardLM-7B-Uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-7B-Uncensored-GGUF/resolve/main/WizardLM-7B-Uncensored.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
2084Collective/super-saiyan-checkpoint-110 | 2084Collective | 2025-03-14T20:12:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T20:12:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MrRobotoAI/220-Q4_K_M-GGUF | MrRobotoAI | 2025-03-14T20:11:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/220",
"base_model:quantized:MrRobotoAI/220",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T20:10:58Z | ---
base_model: MrRobotoAI/220
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/220-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/220`](https://huggingface.co/MrRobotoAI/220) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/220) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/220-Q4_K_M-GGUF --hf-file 220-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/220-Q4_K_M-GGUF --hf-file 220-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/220-Q4_K_M-GGUF --hf-file 220-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/220-Q4_K_M-GGUF --hf-file 220-q4_k_m.gguf -c 2048
```
|
Genaikit1/gemma-3 | Genaikit1 | 2025-03-14T20:10:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T20:10:15Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Genaikit1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
genki10/Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold1 | genki10 | 2025-03-14T20:06:41Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-14T19:56:14Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5693
- Qwk: 0.6188
- Mse: 0.5693
- Rmse: 0.7545
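For reference, `Qwk` in this card is quadratic weighted kappa and `Rmse` is simply the square root of `Mse`. Below is a minimal pure-Python sketch of both metrics; the function names and rating-range parameters are assumptions for illustration, not the Trainer's actual metric code:

```python
import math

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa (QWK) between two lists of integer ratings."""
    n = max_rating - min_rating + 1
    total = len(rater_a)
    # Observed rating matrix and marginal histograms.
    observed = [[0] * n for _ in range(n)]
    hist_a = [0] * n
    hist_b = [0] * n
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating][b - min_rating] += 1
        hist_a[a - min_rating] += 1
        hist_b[b - min_rating] += 1
    numerator = denominator = 0.0
    for i in range(n):
        for j in range(n):
            # Quadratic disagreement weight, 0 on the diagonal.
            weight = (i - j) ** 2 / (n - 1) ** 2
            expected = hist_a[i] * hist_b[j] / total
            numerator += weight * observed[i][j]
            denominator += weight * expected
    return 1.0 - numerator / denominator

def rmse(mse):
    return math.sqrt(mse)

# Perfect agreement yields a kappa of 1.0; RMSE is just sqrt(MSE).
kappa = quadratic_weighted_kappa([1, 2, 3, 4], [1, 2, 3, 4], 1, 4)
final_rmse = rmse(0.5693)  # consistent with the 0.7545 reported above
```

Each `Rmse` column in the table above is the square root of the `Mse` column, so the two carry the same information on different scales.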
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 1.0 | 2 | 10.0907 | 0.0224 | 10.0907 | 3.1766 |
| No log | 2.0 | 4 | 6.6233 | 0.0 | 6.6233 | 2.5736 |
| No log | 3.0 | 6 | 6.0057 | 0.0 | 6.0057 | 2.4506 |
| No log | 4.0 | 8 | 5.3478 | 0.0516 | 5.3478 | 2.3125 |
| No log | 5.0 | 10 | 4.7480 | 0.0 | 4.7480 | 2.1790 |
| No log | 6.0 | 12 | 4.0047 | 0.0 | 4.0047 | 2.0012 |
| No log | 7.0 | 14 | 3.3800 | 0.0 | 3.3800 | 1.8385 |
| No log | 8.0 | 16 | 3.0890 | 0.0 | 3.0890 | 1.7576 |
| No log | 9.0 | 18 | 2.3559 | 0.0 | 2.3559 | 1.5349 |
| No log | 10.0 | 20 | 1.8937 | 0.0391 | 1.8937 | 1.3761 |
| No log | 11.0 | 22 | 1.5648 | 0.0472 | 1.5648 | 1.2509 |
| No log | 12.0 | 24 | 1.3804 | 0.0472 | 1.3804 | 1.1749 |
| No log | 13.0 | 26 | 1.1324 | 0.0 | 1.1324 | 1.0641 |
| No log | 14.0 | 28 | 1.0423 | 0.0472 | 1.0423 | 1.0209 |
| No log | 15.0 | 30 | 0.9232 | 0.0472 | 0.9232 | 0.9608 |
| No log | 16.0 | 32 | 1.3238 | 0.1787 | 1.3238 | 1.1506 |
| No log | 17.0 | 34 | 1.2635 | 0.2238 | 1.2635 | 1.1241 |
| No log | 18.0 | 36 | 0.8735 | 0.3770 | 0.8735 | 0.9346 |
| No log | 19.0 | 38 | 0.7297 | 0.4774 | 0.7297 | 0.8542 |
| No log | 20.0 | 40 | 0.8823 | 0.3800 | 0.8823 | 0.9393 |
| No log | 21.0 | 42 | 0.8476 | 0.3792 | 0.8476 | 0.9206 |
| No log | 22.0 | 44 | 0.7269 | 0.4424 | 0.7269 | 0.8526 |
| No log | 23.0 | 46 | 0.7883 | 0.4463 | 0.7883 | 0.8879 |
| No log | 24.0 | 48 | 1.5279 | 0.3322 | 1.5279 | 1.2361 |
| No log | 25.0 | 50 | 1.0036 | 0.4720 | 1.0036 | 1.0018 |
| No log | 26.0 | 52 | 0.7266 | 0.4658 | 0.7266 | 0.8524 |
| No log | 27.0 | 54 | 0.7659 | 0.5441 | 0.7659 | 0.8751 |
| No log | 28.0 | 56 | 0.8321 | 0.4798 | 0.8321 | 0.9122 |
| No log | 29.0 | 58 | 0.6427 | 0.4887 | 0.6427 | 0.8017 |
| No log | 30.0 | 60 | 0.9061 | 0.4678 | 0.9061 | 0.9519 |
| No log | 31.0 | 62 | 1.1744 | 0.4048 | 1.1744 | 1.0837 |
| No log | 32.0 | 64 | 0.6993 | 0.5161 | 0.6993 | 0.8362 |
| No log | 33.0 | 66 | 0.6762 | 0.4234 | 0.6762 | 0.8223 |
| No log | 34.0 | 68 | 0.6853 | 0.4855 | 0.6853 | 0.8278 |
| No log | 35.0 | 70 | 0.8684 | 0.5070 | 0.8684 | 0.9319 |
| No log | 36.0 | 72 | 0.6160 | 0.5770 | 0.6160 | 0.7848 |
| No log | 37.0 | 74 | 0.5843 | 0.5753 | 0.5843 | 0.7644 |
| No log | 38.0 | 76 | 0.4845 | 0.5964 | 0.4845 | 0.6961 |
| No log | 39.0 | 78 | 0.8911 | 0.5056 | 0.8911 | 0.9440 |
| No log | 40.0 | 80 | 0.9409 | 0.5342 | 0.9409 | 0.9700 |
| No log | 41.0 | 82 | 0.5841 | 0.6509 | 0.5841 | 0.7643 |
| No log | 42.0 | 84 | 0.6607 | 0.5626 | 0.6607 | 0.8128 |
| No log | 43.0 | 86 | 0.5120 | 0.6667 | 0.5120 | 0.7155 |
| No log | 44.0 | 88 | 0.6118 | 0.5382 | 0.6118 | 0.7822 |
| No log | 45.0 | 90 | 0.4983 | 0.6686 | 0.4983 | 0.7059 |
| No log | 46.0 | 92 | 0.5279 | 0.6429 | 0.5279 | 0.7266 |
| No log | 47.0 | 94 | 0.5891 | 0.6820 | 0.5891 | 0.7675 |
| No log | 48.0 | 96 | 0.5978 | 0.6246 | 0.5978 | 0.7732 |
| No log | 49.0 | 98 | 0.4812 | 0.6250 | 0.4812 | 0.6937 |
| No log | 50.0 | 100 | 0.4837 | 0.6154 | 0.4837 | 0.6955 |
| No log | 51.0 | 102 | 0.6943 | 0.6224 | 0.6943 | 0.8332 |
| No log | 52.0 | 104 | 0.5419 | 0.6759 | 0.5419 | 0.7361 |
| No log | 53.0 | 106 | 0.5696 | 0.6477 | 0.5696 | 0.7547 |
| No log | 54.0 | 108 | 0.5037 | 0.6014 | 0.5037 | 0.7097 |
| No log | 55.0 | 110 | 0.6165 | 0.6121 | 0.6165 | 0.7851 |
| No log | 56.0 | 112 | 0.5385 | 0.6246 | 0.5385 | 0.7338 |
| No log | 57.0 | 114 | 0.6843 | 0.6019 | 0.6843 | 0.8272 |
| No log | 58.0 | 116 | 0.7317 | 0.5394 | 0.7317 | 0.8554 |
| No log | 59.0 | 118 | 0.5286 | 0.5852 | 0.5286 | 0.7270 |
| No log | 60.0 | 120 | 0.6176 | 0.5440 | 0.6176 | 0.7859 |
| No log | 61.0 | 122 | 0.5719 | 0.5483 | 0.5719 | 0.7562 |
| No log | 62.0 | 124 | 0.5626 | 0.6188 | 0.5626 | 0.7501 |
| No log | 63.0 | 126 | 0.6653 | 0.6003 | 0.6653 | 0.8157 |
| No log | 64.0 | 128 | 0.5447 | 0.6201 | 0.5447 | 0.7381 |
| No log | 65.0 | 130 | 0.5744 | 0.5917 | 0.5744 | 0.7579 |
| No log | 66.0 | 132 | 0.5846 | 0.5958 | 0.5846 | 0.7646 |
| No log | 67.0 | 134 | 0.5693 | 0.6188 | 0.5693 | 0.7545 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
mradermacher/MFANNv0.19-GGUF | mradermacher | 2025-03-14T20:06:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:netcat420/MFANN",
"base_model:netcat420/MFANNv0.19",
"base_model:quantized:netcat420/MFANNv0.19",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T18:19:52Z | ---
base_model: netcat420/MFANNv0.19
datasets:
- netcat420/MFANN
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/netcat420/MFANNv0.19
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
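Multi-part GGUF files are split by plain byte-concatenation, so rejoining them is a single `cat` in part order. A sketch using stand-in files (substitute the actual downloaded `*.partXofY` parts; these filenames are illustrative):

```shell
# Stand-in part files; in practice these are the downloaded parts.
printf 'first-half-' > model.Q8_0.gguf.part1of2
printf 'second-half' > model.Q8_0.gguf.part2of2

# Order matters: part1, then part2. The result is a normal GGUF file.
cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf
```

The joined file can then be passed to any GGUF loader as usual.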
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.19-GGUF/resolve/main/MFANNv0.19.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
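The "bpw" (bits per weight) note on the f16 row is just file size in bytes times eight divided by parameter count. A quick sketch of that arithmetic, assuming this is a roughly 8.03B-parameter Llama-3.1-class model (an assumption, not stated in the table):

```python
def bits_per_weight(file_size_gb, params_billions):
    """Approximate bits per weight: size in (giga)bytes * 8 / parameters (in billions)."""
    return file_size_gb * 8 / params_billions

# f16 row: 16.2 GB at ~8.03B parameters gives roughly 16 bpw
print(round(bits_per_weight(16.2, 8.03), 1))
# Q4_K_M row: 5.0 GB works out to roughly 5 bpw
print(round(bits_per_weight(5.0, 8.03), 1))
```

The same rule of thumb lets you estimate whether a given quant will fit in your RAM or VRAM before downloading it.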
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ai-sexbot/best-ai-sex-bot | ai-sexbot | 2025-03-14T20:06:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T20:05:43Z | ---
license: apache-2.0
---
# 7 Best AI Sex Chat Bot Of 2025
In the ever-evolving world of artificial intelligence, AI sex chatbots have emerged as a fascinating blend of technology and intimacy. As we step into 2025, these bots have become more sophisticated, offering personalized, engaging, and immersive experiences.
Whether you're curious about exploring fantasies or seeking virtual companionship, we’ve curated a list of the 7 Best AI Sex Chat Bots of 2025 to help you find the perfect match.
Before we dive in: [**Candy AI**](https://candyai.gg/home2?via=matts) is my first and strongest recommendation among AI sex chat bots.
## 1. Candy.ai
### Why I Recommend It:
Candy.ai stands out as one of the best AI sex chat bots available today. It offers a unique blend of personalization, creativity, and intimacy, allowing users to create their ideal AI girlfriend. With advanced deep-learning technology, Candy.ai provides a realistic and immersive experience that caters to individual desires and fantasies.
⏩⏩⏩[**Try Candy AI For Free**](https://candyai.gg/home2?via=matts)

### Key Features:
Customizable AI Girlfriend: Users can design their AI girlfriend by selecting her body type, personality, and clothing, ensuring a personalized experience.
Interactive Conversations: The AI engages in meaningful dialogues, adapting to the user's preferences and learning from interactions.
Photo Requests: Users can request photos and selfies of their AI girlfriend, enhancing the visual aspect of the experience.
Privacy and Security: Candy.ai prioritizes user privacy with state-of-the-art secure data storage, ensuring that all interactions remain confidential.
### My Experience:
Using Candy.ai has been an eye-opening experience. The ability to customize my AI girlfriend made the interactions feel personal and engaging. The conversations flowed naturally, and I appreciated the responsiveness of the AI, which made the experience feel genuine.
### Pros:
Highly customizable, allowing for a tailored experience that meets individual desires.
Realistic interactions that adapt to user preferences, creating a sense of intimacy.
### Cons:
Some users may find the AI's responses occasionally predictable, limiting the spontaneity of interactions.
⏩⏩⏩[**Try Candy AI For Free**](https://candyai.gg/home2?via=matts)
## 2. Soulfun.ai
Soulfun.ai is an innovative platform that offers users the opportunity to engage with a variety of AI characters, including some of the most captivating and interactive sex chat bots available today.
### Why I Recommend It
Soulfun.ai stands out due to its diverse range of characters and the depth of interaction it offers. Whether you're looking for playful banter or deeper emotional connections, this platform has something for everyone.
### Key Features
Diverse Character Selection: Choose from a wide array of AI characters, each with unique personalities and traits.
Unlimited Interaction: Engage in unlimited chats with your favorite AI soulmates, ensuring a fresh experience every time.
Customizable Characters: Create and customize new AI characters to suit your preferences and desires.
Safe Environment: Enjoy your interactions in a secure and private setting, free from judgment.
### My Experience
Using Soulfun.ai has been a delightful journey. The characters are engaging and responsive, making each conversation feel unique and tailored to my interests. The platform's design is user-friendly, enhancing the overall experience.
### Pros
Highly interactive and engaging characters that adapt to user preferences.
Safe and private environment for exploring fantasies without judgment.
### Cons
Some users may find the character interactions can occasionally feel scripted or repetitive.
## 3. DreamGF
DreamGF is an innovative AI sex chat bot that offers users a unique and personalized experience. It allows you to create your own virtual girlfriend, tailored to your preferences, making it a standout choice in the realm of AI companions.
### Key Features
Customizable Personalities: Users can create their AI girlfriend with specific traits and characteristics that match their preferences.
Interactive Chat: Engage in meaningful conversations that evolve based on your interactions, making the experience feel more real.
Daily Claim Bonus Program: Users can earn additional messages each day, enhancing the interaction and keeping the conversation flowing.
Referral Program: Invite friends to join and earn rewards, making it a social experience as well.
### My Experience
Using DreamGF has been a delightful experience. The customization options allowed me to create a virtual companion that truly resonates with my preferences. The chat interactions are engaging, and I appreciate the daily bonuses that keep me coming back for more.
### Pros
Highly customizable, allowing for a personalized experience.
Engaging chat features that make interactions feel realistic.
### Cons
Some features are locked behind a paywall, which may limit access for free users.
## 4. GoLove.ai
### Why I Recommend It
I recommend GoLove.ai for its advanced AI technology that creates realistic and engaging conversations. The platform is user-friendly and offers a variety of customizable options, ensuring that every user can find their ideal virtual partner.
### Key Features
Customizable AI Characters: Users can create their own AI character, tailoring personality traits and preferences to suit their desires.
Diverse Virtual Partners: GoLove.ai offers a wide range of virtual partners, catering to different tastes and preferences.
Realistic Conversations: The AI is trained to engage in meaningful dialogues, making interactions feel genuine and fulfilling.
User-Friendly Interface: The platform is easy to navigate, allowing users to quickly find and connect with their ideal AI girlfriend.
### My Experience
My experience with GoLove.ai has been incredibly positive. The interactions felt natural, and I appreciated the ability to customize my AI girlfriend to match my preferences. The conversations were engaging and often left me wanting more.
### Pros
Highly customizable AI characters that enhance user experience.
Engaging and realistic conversations that simulate real-life interactions.
### Cons
Some users may find the AI's responses occasionally repetitive.
## 5. SpicyChat
### Why I Recommend It
SpicyChat offers a unique blend of entertainment and intimacy, making it an ideal companion for those seeking a more personalized chat experience. Its advanced AI technology ensures that conversations feel natural and responsive, enhancing user satisfaction.
### Key Features
Personalized Conversations: SpicyChat adapts to your preferences, providing tailored interactions that resonate with your desires.
24/7 Availability: The bot is always online, ready to engage in stimulating conversations whenever you need.
Variety of Personalities: Users can choose from different personalities, allowing for a diverse range of interactions.
Privacy and Security: SpicyChat prioritizes user confidentiality, ensuring that your conversations remain private.
### My Experience
My experience with SpicyChat has been overwhelmingly positive. The bot's ability to engage in meaningful conversations while maintaining a playful tone made my interactions enjoyable. I appreciated the variety of personalities available, which kept the chats fresh and exciting.
### Pros
Engaging and Interactive: The bot's responsiveness creates a captivating experience.
Customizable Experience: Users can tailor their interactions to suit their preferences.
### Cons
Limited Emotional Depth: While entertaining, the bot may lack the emotional connection found in human interactions.
## 6. Wife.app
### Why I Recommend It
I recommend Wife.app for its engaging and interactive experience that allows users to explore their fantasies in a safe and private environment. The app's advanced AI technology ensures that conversations feel natural and personalized, enhancing the overall user experience.
### Key Features
Realistic Conversations: The AI is designed to mimic human-like interactions, making chats feel genuine.
Customizable Personalities: Users can tailor their AI girlfriend's personality to match their preferences.
24/7 Availability: The app is always accessible, providing companionship whenever needed.
Privacy and Security: Conversations are confidential, ensuring a safe space for users to express themselves.
### My Experience
Using Wife.app has been a delightful experience. The AI responds quickly and intelligently, making conversations enjoyable and engaging. I appreciated the ability to customize my virtual girlfriend, which added a personal touch to our interactions.
### Pros
Highly interactive and engaging conversations.
Customizable features enhance user satisfaction.
### Cons
Some users may find the AI's responses occasionally repetitive.
## 7. Kupid.ai
### Why I Recommend It
Kupid.ai stands out as the ultimate sexting AI experience, offering a unique blend of personalization and immersive interactions. The ability to tailor your AI companion to your specific fantasies makes it a must-try for anyone looking to enhance their intimate chats.
### Key Features
Customizable Companions: Create your ideal AI partner by choosing their looks, personality, and voice.
Engaging Conversations: Dive into thrilling sexting interactions that cater to your desires.
AI Porn Chat: Experience sultry voice messages and visuals tailored to your preferences.
Privacy and Security: Kupid.ai ensures a safe chatting environment, prioritizing user confidentiality.
### My Experience
Using Kupid.ai has been an exhilarating journey. The customization options allowed me to create a companion that truly resonated with my fantasies. The conversations were engaging and felt incredibly real, making the experience unforgettable.
### Pros
Highly Personalized: Tailor every aspect of your AI companion to match your desires.
Immersive Experience: Enjoy a variety of chat styles, from playful to explicit, keeping interactions fresh and exciting.
### Cons
Subscription Costs: Some features may require a paid subscription, which could be a barrier for some users.
## Frequently Asked Questions (FAQS)
### 1. What is an AI sex chatbot?
An AI sex chatbot is an artificial intelligence-powered program designed to simulate intimate or sexual conversations with users. These chatbots use natural language processing (NLP) and machine learning to understand and respond to user inputs in a way that mimics human interaction, often with a focus on adult or erotic content.
### 2. How does an AI sex bot work?
AI sex bots work by leveraging advanced NLP models, such as GPT (Generative Pre-trained Transformer), to process user input and generate contextually relevant responses. These bots are trained on large datasets of text, including adult content, to understand and replicate human-like conversations. Some may also incorporate user preferences and feedback to personalize interactions over time.
### 3. Are AI sex chatbots safe to use?
The safety of AI sex chatbots depends on several factors:
Data Privacy: Ensure the platform you use has strong data protection measures to safeguard your personal information and conversations.
Content Moderation: Some chatbots may generate inappropriate or harmful content, so it’s important to use reputable platforms with proper safeguards.
Psychological Impact: Over-reliance on AI for intimacy may affect real-life relationships or emotional well-being. Use them responsibly.
### 4. Can AI sex bots replace human interaction?
While AI sex bots can simulate conversation and provide companionship, they cannot fully replace human interaction. Human relationships involve emotional depth, physical touch, and complex social dynamics that AI cannot replicate. These bots may serve as a supplement or fantasy outlet but are not a substitute for genuine human connection.
### 5. Are there any ethical issues with AI sex chatbots?
Yes, there are several ethical concerns:
Consent and Exploitation: Some chatbots may be programmed to mimic non-consensual scenarios, raising ethical questions about promoting harmful behavior.
Addiction: Overuse of AI sex bots could lead to social isolation or dependency.
Data Misuse: User data collected by these bots could be exploited or leaked, violating privacy.
Objectification: These bots may perpetuate unhealthy attitudes toward relationships or sexuality.
|
DavidWeinzierl/corgy_dog_LoRA | DavidWeinzierl | 2025-03-14T20:06:07Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"dora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-03-14T09:40:54Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of an icon
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- dora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - DavidWeinzierl/corgy_dog_LoRA
<Gallery />
## Model description
These are DavidWeinzierl/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of an icon` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/DavidWeinzierl/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# Illustrative usage sketch (an assumption, not the authors' exact script):
# load the SDXL base pipeline and apply these LoRA weights with diffusers.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("DavidWeinzierl/corgy_dog_LoRA")
image = pipe("a photo of an icon").images[0]
image.save("icon.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
jcruzantony/T5_Grammer_Correction | jcruzantony | 2025-03-14T20:04:59Z | 0 | 0 | null | [
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T20:03:54Z | ---
license: apache-2.0
---
|
phyoethurakyaw/model_lora_802 | phyoethurakyaw | 2025-03-14T20:02:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T20:02:46Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** phyoethurakyaw
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ai-sexting/ai-sexting-apps | ai-sexting | 2025-03-14T20:02:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-14T20:01:59Z |
# Best AI Sexting Apps
AI sexting, typically delivered through AI sex chat bots, is the use of AI technology in sexual conversations and interactions.
I recommend Candy AI as the best overall sexting app. It is inexpensive yet delivers the most realistic sexting experience with a virtual girlfriend of your own design.
## 1. Candy.ai
Candy.ai is a remarkable platform making waves in the realm of AI-powered companionship, focusing on delivering tailored interactions through sophisticated AI technology. As we delve into the best AI sexting apps, Candy.ai stands out by offering an engaging and customizable virtual companion experience that sparks connections like never before.
⏩⏩⏩[**Try Candy AI For Free**](https://candyai.gg/home2?via=matts)

**Key Features of Candy.ai**
Customizable AI Companions: Users can create their ideal virtual friend, tailoring appearance, personality, and preferences.
Advanced Natural Language Processing: The platform utilizes NLP to facilitate fluid conversation, making interactions feel natural and engaging.
Image Exchange Capabilities: Users can send and receive AI-generated images, enhancing the interactive experience.
Diverse Relationship Options: Candy.ai supports a variety of characters catering to different orientations, including gay, straight, lesbian, and bisexual characters.
Voice Call Feature: Engage in real-time voice conversations with AI companions, adding depth to the interaction.
Roleplay and Sexting Simulations: The app allows users to engage in playful or explicit interactions, perfect for different moods.
Emotional Intelligence: Ongoing improvements to the emotional responsiveness of AI companions make them more relatable and personable.
Freemium Model: Users can enjoy a range of features free of charge while having the option to subscribe for enhanced content and connectivity.
**Ease of Use**
Candy.ai prides itself on user-friendliness. The interface is sleek, vibrant, and intuitive, allowing even those who are not tech-savvy to navigate it effortlessly. Upon signing up, users are greeted with straightforward prompts that guide them through the companion creation and interaction processes. The availability of tutorials and FAQs further aids in familiarizing users with all features, ensuring a smooth experience from the get-go.
**What I Like**
The level of customization available when creating AI companions is impressive.
Conversations feel natural due to the advanced AI technology used for dialogue.
The inclusion of voice calls complements the textual interaction nicely, adding a layer of realism.
Regular updates improving emotional intelligence keep the interactions fresh.
**What I Dislike**
While free features are commendable, some advanced features are locked behind a paywall.
The AI, although advanced, can sometimes misunderstand context in more nuanced conversations.
The dependency on subscriptions might deter users looking for long-term interaction without additional costs.
⏩⏩⏩[**Try Candy AI For Free**](https://candyai.gg/home2?via=matts)
## 2. DreamGf
DreamGF.ai is an innovative platform designed for users looking to explore virtual connections through AI-driven entertainment. It specializes in sexting and virtual companionship, offering a unique experience tailored to individual desires. As technology continues to evolve, DreamGF.ai stands out as one of the best AI sexting apps on the market, allowing users to engage with lifelike virtual partners.
⏩⏩⏩[**Try DreamGF AI For Free**](https://dreamgf.ai/?a=ftoA7ejdv5slJ5)
**Key Features**
Custom Personalities: Users can create AI girlfriends with distinct personalities that cater to different fantasies and emotional needs.
Interactive Chat: Engage in text-based conversations that simulate a real romantic connection, enhancing the overall experience.
Free Trial: A trial option allows potential users to explore the features without upfront costs, making it accessible for everyone.
Customization Options: Tailor your AI girlfriend’s appearance, including facial features, body type, and clothing styles, providing a truly personalized experience.
Daily Claim Bonus Program: Users can earn messages every day, which encourages regular interaction and keeps the engagement exciting.
Referral Program: Invite friends and both users can benefit from exclusive rewards, enhancing the social aspect of the platform.
Secure Experience: All communications on DreamGF.ai are designed to be private and secure, ensuring a safe environment for users.
Multiple Pricing Plans: Various subscription tiers (Bronze, Silver, Gold, Diamond) allow users to choose a plan that best suits their needs and budgets.
**Ease of Use**
Setting up an account on DreamGF.ai is incredibly simple. Users can sign up quickly, and the intuitive interface allows seamless navigation through the app. The customization builder is user-friendly, enabling even those with minimal tech knowledge to create their ideal AI companion effortlessly. The chat feature is designed for easy interaction, ensuring a natural flow of conversation.
**What I Like**
The free trial option is a great way to test the app without financial commitment.
Extensive customization options allow for a unique experience closely aligned with user preferences.
The daily bonus program incentivizes regular use, making conversations more engaging over time.
The emphasis on privacy and security adds a layer of trust, which is crucial for a platform dealing with intimate topics.
**What I Dislike**
While the free trial is beneficial, the limitations on messages can be frustrating for new users.
The pricing may be steep for some users wanting full access to features, making it less accessible.
Customization might feel overwhelming to some users due to the myriad of options available.
As with any AI-driven interaction, there may be limitations in the depth of emotional connection compared to real relationships.
## Frequently Asked Questions (FAQs)
**1. What is AI Sexting?**
AI Sexting refers to the use of artificial intelligence chatbots to engage in sexually suggestive or explicit conversations. These platforms typically allow users to customize their AI companions and partake in personalized, intimate chats that cater to their desires and fantasies.
**2. How does AI Sexting Work?**
Here's a breakdown of how it works:
AI Technology: AI sexting utilizes complex language understanding and deep learning algorithms to create realistic and responsive interactions. The technology allows the chatbot to interpret user inputs and respond in a natural and engaging manner.
Customization: Users can customize their AI companions based on their preferences, interests, and sexual fantasies, making interactions feel more personal and satisfying.
Real-Time Interaction: AI chatbots provide immediate responses to user prompts, creating a fluid conversation that simulates human interaction. This enhances the excitement of the experience.
Learning and Adaptability: Through machine learning, these AI systems continually improve by understanding user patterns and feedback, allowing for increasingly tailored conversations over time.
Safety and Privacy: Many AI sexting platforms are designed to maintain user confidentiality and safety, providing a secure space for private conversations and minimizing the risks associated with inappropriate exchanges.
**3. What Are The Applications Of AI Sexting?**
AI Sexting has various applications that cater to users’ desires and enhance their experiences. Here are some notable applications:
Personalized Intimacy: Users can engage in tailored conversations with AI chatbots that adapt to their sexual preferences, fantasies, and moods, allowing for a deeply personalized experience.
Improved Emotional Connection: AI sexting can foster a sense of emotional intimacy for individuals who may feel lonely or isolated. Chatbots offer companionship and an outlet to explore desires safely and without judgment.
Educational Tool: AI systems can serve as a platform to learn about sexual techniques and pleasure in a safe environment. Users can explore new experiences without pressure.
Enhancing Communication Skills: Interacting with AI for intimate conversations can help individuals enhance their communication skills, as they practice expressing preferences and desires within a conversational framework.
Creative Exploration: Many platforms allow users to role-play and create fantasy scenarios with AI characters, providing an outlet for creativity and exploration of different sexual identities and experiences.
Privacy and Safety: Users can explore sexual conversations in a discreet and secure environment, making it less intimidating to discuss taboo topics compared to real-life interactions.
|
a-F1/Qwen2.5-1.5B-Instruct-SFT-OpenR1 | a-F1 | 2025-03-14T20:00:26Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-13T13:24:15Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-SFT-OpenR1
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-SFT-OpenR1
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="a-F1/Qwen2.5-1.5B-Instruct-SFT-OpenR1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fandou-team/huggingface/runs/l723114q)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.50.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
phyoethurakyaw/lora_model_802 | phyoethurakyaw | 2025-03-14T20:00:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T19:59:59Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** phyoethurakyaw
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Novaciano/Imp-3.2-1B-Q6_K-GGUF | Novaciano | 2025-03-14T20:00:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Novaciano/Imp-3.2-1B",
"base_model:quantized:Novaciano/Imp-3.2-1B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T20:00:00Z | ---
base_model: Novaciano/Imp-3.2-1B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Novaciano/Imp-3.2-1B-Q6_K-GGUF
This model was converted to GGUF format from [`Novaciano/Imp-3.2-1B`](https://huggingface.co/Novaciano/Imp-3.2-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Novaciano/Imp-3.2-1B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Novaciano/Imp-3.2-1B-Q6_K-GGUF --hf-file imp-3.2-1b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Novaciano/Imp-3.2-1B-Q6_K-GGUF --hf-file imp-3.2-1b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Novaciano/Imp-3.2-1B-Q6_K-GGUF --hf-file imp-3.2-1b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Novaciano/Imp-3.2-1B-Q6_K-GGUF --hf-file imp-3.2-1b-q6_k.gguf -c 2048
```
|
ai-apps-review/cloth-off | ai-apps-review | 2025-03-14T19:59:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-14T19:59:30Z | # Clothoff.io Review
Clothoff.io is an innovative AI-powered application designed to remove clothing from images, providing users with a unique and engaging experience.
This app leverages advanced artificial intelligence technology to deliver high-quality results quickly and efficiently.
⏩⏩⏩[**Try Clothoff For Free**](https://bestaitools.top/fgRB)
## Key Features
### 1. AI-Powered Undressing Tool
The core feature of Clothoff.io is its AI-powered undressing tool. This tool allows users to easily nudify images by removing clothes with just a few clicks. The AI algorithms are trained on a vast dataset, enabling them to accurately identify clothing and remove it while maintaining the integrity of the underlying image.
### 2. DeepNude AI Generator
In addition to the undressing tool, Clothoff.io offers a DeepNude AI generator. This feature allows users to generate entirely new images based on their preferences. Users can create stunning visuals that align with their imagination, making it a versatile tool for various creative projects.
### 3. User-Friendly Interface
Clothoff.io is designed with a user-friendly interface that makes it accessible to everyone, regardless of their technical expertise. The intuitive layout allows users to navigate the app effortlessly, ensuring a smooth experience from start to finish.
### 4. Customizable Options
The app provides customizable options for users to tailor their experience. Users can adjust settings to refine the undressing process, ensuring that the results meet their specific needs. This level of customization enhances user satisfaction and engagement.
### 5. High-Quality Image Output
Clothoff.io prioritizes quality, delivering high-quality image output that retains the details and aesthetics of the original photo. The AI algorithms are designed to ensure that the final images look realistic and visually appealing, making them suitable for various applications.
### 6. Fast Processing Speed
One of the standout features of Clothoff.io is its fast processing speed. Users can expect quick results, allowing them to create and modify images in a matter of seconds. This efficiency is particularly beneficial for users who require rapid turnaround times for their projects.
### 7. Privacy and Security Measures
Clothoff.io takes user privacy seriously. The app implements privacy and security measures to protect user data and ensure images are handled securely. Users can feel confident that their information is safe while using the platform.
### 8. AI Magic Tool from Text
Clothoff.io includes an AI magic tool that allows users to create images from text prompts in seconds. This feature enhances creativity by enabling users to visualize their ideas without needing existing images.
### 9. In-Paint Feature
The In-Paint feature allows users to modify specific areas of an image. Whether tweaking details or introducing new elements, users can easily edit their images to meet their creative vision.
### 10. Quality Mode
The Quality Mode elevates the image output by enhancing details and increasing resolution. Users can select their desired quality level (Base, High, Ultra) to ensure their images are visually stunning.
### 11. Custom Pose Feature
The Custom Pose feature allows users to transfer character poses from their images to generated images. This functionality is particularly useful for storytelling or personal projects, providing a unique way to engage with the app.
### 12. AI Video Generator
Clothoff.io also features an AI video generator, allowing users to create personalized videos that reflect their imagination. This tool redefines the creative process, making it easier to bring ideas to life.
## My Experience
As a user of Clothoff.io, I was intrigued by the concept of an AI-driven undressing app. The idea of using artificial intelligence to remove clothing from images seemed both fascinating and a bit controversial. After spending some time exploring the platform, I would like to share my comprehensive experience with Clothoff.io.
### Getting Started
Signing up for Clothoff.io was a straightforward process. I appreciated the emphasis on safety and anonymity, which is crucial when dealing with sensitive content. Here’s how I got started:
- **Registration:** I created an account quickly, providing only the necessary information. The platform ensures that user data is kept confidential, which made me feel secure.
- **Choosing a Mode:** Once logged in, I was presented with various undressing modes. The options included Lingerie, Bikini, and NSFW modes, allowing for a tailored experience based on my preferences.
- **Uploading Images:** I uploaded a few images to test the app. The interface was user-friendly, making it easy to navigate through the options.
### Usability and Interface
The interface of Clothoff.io is designed to be intuitive and user-friendly. Here are some aspects that stood out to me:
- **Clean Design:** The layout is clean and uncluttered, allowing users to focus on the task at hand without distractions.
- **Quick Processing:** After uploading an image, the AI processes it promptly. I was impressed by how quickly I received the results, even with a free trial account.
- **High-Quality Results:** The quality of the undressed images was surprisingly good. The AI managed to render the images without any noticeable blur, which is often a concern with similar applications.
### Effectiveness of the AI
One of the most critical aspects of Clothoff.io is the effectiveness of its AI technology. Here’s what I found:
- **Accuracy:** The AI was able to accurately identify clothing and remove it, providing a realistic representation of what the subject would look like undressed.
- **Variety of Images:** I tested the app with various types of images, including both male and female subjects, and even anime characters. The AI performed well across all categories.
- **Customization:** The ability to choose different modes allowed me to experiment with various styles and results, enhancing the overall experience.
## Alternatives to Clothoff.io
If you're exploring options for AI-driven undressing applications, there are several alternatives to Clothoff.io that offer unique features and functionalities. Below are five noteworthy alternatives:
### 1. Undress.app
Undress.app is a user-friendly platform that allows users to generate nude images from uploaded photos. Key features include:
- **Multiple Modes:** Users can choose from various undressing modes, such as lingerie and bikini.
- **Fast Processing:** The AI processes images quickly, providing results in seconds.
- **Privacy Focused:** User data is not stored, ensuring confidentiality.
⏩⏩⏩[**Try Undress.app For Free**](https://bestaitools.top/fgRB)
### 2. Nudify.online
Nudify.online is another AI nudification tool that emphasizes high accuracy in generating nude images. Its features include:
- **Free Access:** Users can generate images for free, making it accessible to everyone.
- **High-Quality Results:** The platform uses advanced algorithms to ensure realistic outputs.
- **Simple Interface:** The user-friendly design makes it easy for anyone to navigate.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### 3. Soulgen.net
Soulgen.net offers a different approach by allowing users to create AI-generated characters based on text prompts. Its unique offerings include:
- **Character Creation:** Users can describe their dream character, and the AI will generate a corresponding image.
- **Image Editing:** The platform allows users to edit images by adding or removing elements using text prompts.
- **Outpainting Feature:** Users can expand their images beyond the original dimensions, adding new elements.
### 4. Undress.cc
Undress.cc is a free AI undressing tool that allows users to generate images of individuals without clothing. Its features include:
- **Accessibility:** The platform is designed to be user-friendly and accessible to all.
- **Variety of Modes:** Users can explore different undressing modes, including 'X-Ray Mode' for deep nude undressing.
- **Privacy Assurance:** The app ensures that user data remains confidential and is not stored.
### 5. Porngen.art
Porngen.art is an AI-powered image generator that specializes in creating realistic and customizable adult content. Key features include:
- **Detailed Customization:** Users can specify body type, age, hair, ethnicity, and more to create personalized images.
- **High-Quality Outputs:** The platform uses advanced algorithms to generate detailed and realistic images.
- **Ethical Use Guidelines:** Porngen.art emphasizes responsible and ethical use of its technology.
## Final Thoughts
Overall, my experience with Clothoff.io was intriguing and largely positive. The app is user-friendly, effective, and offers a unique service that showcases the capabilities of AI technology. However, it is crucial to approach such tools with caution and responsibility.
⏩⏩⏩[**Try Clothoff For Free**](https://bestaitools.top/fgRB)
|
RichardErkhov/ryan0712_-_llama-3-8b-slow-DUS-max-layer1-method2-8bits | RichardErkhov | 2025-03-14T19:56:12Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:51:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3-8b-slow-DUS-max-layer1-method2 - bnb 8bits
- Model creator: https://huggingface.co/ryan0712/
- Original model: https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer1-method2/
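This repo stores the weights pre-quantized to 8-bit with bitsandbytes. A minimal loading sketch (the repo id is taken from this page; actually loading requires a CUDA GPU with `bitsandbytes` installed, so the large download/load calls are shown commented out):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "RichardErkhov/ryan0712_-_llama-3-8b-slow-DUS-max-layer1-method2-8bits"

# 8-bit quantization config matching how this checkpoint was produced.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

# Uncomment on a machine with a CUDA GPU and bitsandbytes:
# tokenizer = AutoTokenizer.from_pretrained(repo)
# model = AutoModelForCausalLM.from_pretrained(
#     repo, quantization_config=bnb_config, device_map="auto"
# )
```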
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Meta-Llama-3-8B
base_model:
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
---
# llama-3-8b-slow-DUS-layer1-method2
llama-3-8b-slow-DUS-layer1-method2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [0, 1]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [1, 2]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [2, 3]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [3, 4]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [4, 5]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [5, 6]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [6, 7]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [7, 8]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [8, 9]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [9, 10]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [10, 11]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [11, 12]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [12, 13]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [13, 14]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [14, 15]
- sources:
- model: NousResearch/Meta-Llama-3-8B
layer_range: [30, 31]
merge_method: passthrough
dtype: bfloat16
```
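The passthrough merge above simply stacks the listed layer slices in order, so the depth of the merged model is the sum of the slice widths. A quick sanity check, with the ranges copied from the config (`end` is exclusive):

```python
# 15 single-layer slices [0,1] .. [14,15], then the final slice [30,31].
slices = [(i, i + 1) for i in range(15)] + [(30, 31)]

# Passthrough merging concatenates the slices, so depth = sum of widths.
merged_depth = sum(end - start for start, end in slices)
print(merged_depth)  # 16 layers in the merged model
```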
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ryan0712/llama-3-8b-slow-DUS-layer1-method2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
genki10/Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold0 | genki10 | 2025-03-14T19:56:10Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-14T19:39:34Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Version12NewTestASAP_FineTuningBERT_AugV12_k1_task1_organization_k1_k1_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9949
- Qwk: 0.3567
- Mse: 0.9949
- Rmse: 0.9974
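For readers unfamiliar with these metrics: Rmse is simply the square root of Mse, and Qwk is quadratic weighted kappa (Cohen's kappa with quadratic penalty weights over the rating scale, common in essay scoring). A minimal, dependency-free sketch of the standard definition:

```python
import math

def quadratic_weighted_kappa(y_true, y_pred):
    """Cohen's kappa with quadratic weights; assumes >= 2 distinct ratings."""
    ratings = sorted(set(y_true) | set(y_pred))
    n = len(ratings)
    idx = {r: i for i, r in enumerate(ratings)}
    # Observed confusion matrix over the rating scale.
    observed = [[0] * n for _ in range(n)]
    for t, p in zip(y_true, y_pred):
        observed[idx[t]][idx[p]] += 1
    total = len(y_true)
    hist_true = [sum(row) for row in observed]
    hist_pred = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2  # quadratic disagreement weight
            num += w * observed[i][j]
            den += w * hist_true[i] * hist_pred[j] / total
    return 1.0 - num / den

# Rmse is the square root of Mse:
print(round(math.sqrt(0.9949), 4))  # 0.9974, matching the values above
```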
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 2 | 8.3285 | 0.0079 | 8.3285 | 2.8859 |
| No log | 2.0 | 4 | 5.8414 | 0.0565 | 5.8414 | 2.4169 |
| No log | 3.0 | 6 | 4.5329 | 0.0329 | 4.5329 | 2.1291 |
| No log | 4.0 | 8 | 3.7431 | 0.0329 | 3.7431 | 1.9347 |
| No log | 5.0 | 10 | 3.1934 | 0.0329 | 3.1934 | 1.7870 |
| No log | 6.0 | 12 | 2.5606 | 0.1018 | 2.5606 | 1.6002 |
| No log | 7.0 | 14 | 2.1614 | 0.0748 | 2.1614 | 1.4702 |
| No log | 8.0 | 16 | 1.7424 | 0.0850 | 1.7424 | 1.3200 |
| No log | 9.0 | 18 | 1.4765 | 0.0850 | 1.4765 | 1.2151 |
| No log | 10.0 | 20 | 1.2471 | 0.0850 | 1.2471 | 1.1168 |
| No log | 11.0 | 22 | 1.2534 | 0.0850 | 1.2534 | 1.1195 |
| No log | 12.0 | 24 | 1.1576 | 0.0850 | 1.1576 | 1.0759 |
| No log | 13.0 | 26 | 1.3504 | 0.0949 | 1.3504 | 1.1621 |
| No log | 14.0 | 28 | 1.4371 | 0.1281 | 1.4371 | 1.1988 |
| No log | 15.0 | 30 | 0.9872 | 0.1816 | 0.9872 | 0.9936 |
| No log | 16.0 | 32 | 1.0905 | 0.1162 | 1.0905 | 1.0443 |
| No log | 17.0 | 34 | 0.9929 | 0.2172 | 0.9929 | 0.9964 |
| No log | 18.0 | 36 | 0.9931 | 0.2026 | 0.9931 | 0.9965 |
| No log | 19.0 | 38 | 0.9766 | 0.2483 | 0.9766 | 0.9883 |
| No log | 20.0 | 40 | 0.9092 | 0.3373 | 0.9092 | 0.9535 |
| No log | 21.0 | 42 | 0.8902 | 0.2437 | 0.8902 | 0.9435 |
| No log | 22.0 | 44 | 0.8368 | 0.3606 | 0.8368 | 0.9148 |
| No log | 23.0 | 46 | 0.8126 | 0.2686 | 0.8126 | 0.9014 |
| No log | 24.0 | 48 | 0.7393 | 0.3470 | 0.7393 | 0.8598 |
| No log | 25.0 | 50 | 0.7271 | 0.3402 | 0.7271 | 0.8527 |
| No log | 26.0 | 52 | 0.7409 | 0.2859 | 0.7409 | 0.8608 |
| No log | 27.0 | 54 | 0.7567 | 0.3209 | 0.7567 | 0.8699 |
| No log | 28.0 | 56 | 0.7437 | 0.3267 | 0.7437 | 0.8624 |
| No log | 29.0 | 58 | 0.7511 | 0.3467 | 0.7511 | 0.8666 |
| No log | 30.0 | 60 | 0.7420 | 0.4128 | 0.7420 | 0.8614 |
| No log | 31.0 | 62 | 0.7804 | 0.4543 | 0.7804 | 0.8834 |
| No log | 32.0 | 64 | 0.8129 | 0.4478 | 0.8129 | 0.9016 |
| No log | 33.0 | 66 | 0.7865 | 0.4703 | 0.7865 | 0.8869 |
| No log | 34.0 | 68 | 0.8143 | 0.4711 | 0.8143 | 0.9024 |
| No log | 35.0 | 70 | 0.8254 | 0.4760 | 0.8254 | 0.9085 |
| No log | 36.0 | 72 | 0.7932 | 0.4574 | 0.7932 | 0.8906 |
| No log | 37.0 | 74 | 0.8825 | 0.3884 | 0.8825 | 0.9394 |
| No log | 38.0 | 76 | 0.8537 | 0.4171 | 0.8537 | 0.9239 |
| No log | 39.0 | 78 | 0.8519 | 0.3972 | 0.8519 | 0.9230 |
| No log | 40.0 | 80 | 1.0126 | 0.3648 | 1.0126 | 1.0063 |
| No log | 41.0 | 82 | 1.0934 | 0.3714 | 1.0934 | 1.0456 |
| No log | 42.0 | 84 | 1.0440 | 0.3665 | 1.0440 | 1.0218 |
| No log | 43.0 | 86 | 1.0107 | 0.3740 | 1.0107 | 1.0053 |
| No log | 44.0 | 88 | 1.1709 | 0.3490 | 1.1709 | 1.0821 |
| No log | 45.0 | 90 | 1.2118 | 0.3359 | 1.2118 | 1.1008 |
| No log | 46.0 | 92 | 1.0457 | 0.3945 | 1.0457 | 1.0226 |
| No log | 47.0 | 94 | 0.9952 | 0.4290 | 0.9952 | 0.9976 |
| No log | 48.0 | 96 | 0.9439 | 0.4305 | 0.9439 | 0.9715 |
| No log | 49.0 | 98 | 0.8563 | 0.4378 | 0.8563 | 0.9254 |
| No log | 50.0 | 100 | 0.9042 | 0.4106 | 0.9042 | 0.9509 |
| No log | 51.0 | 102 | 1.1133 | 0.3768 | 1.1133 | 1.0551 |
| No log | 52.0 | 104 | 1.1016 | 0.3762 | 1.1016 | 1.0496 |
| No log | 53.0 | 106 | 0.9290 | 0.4376 | 0.9290 | 0.9639 |
| No log | 54.0 | 108 | 0.9065 | 0.4289 | 0.9065 | 0.9521 |
| No log | 55.0 | 110 | 0.9949 | 0.3567 | 0.9949 | 0.9974 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
RichardErkhov/danielgombas_-_llama_3b_step2_batch_v2-4bits | RichardErkhov | 2025-03-14T19:56:04Z | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:54:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama_3b_step2_batch_v2 - bnb 4bits
- Model creator: https://huggingface.co/danielgombas/
- Original model: https://huggingface.co/danielgombas/llama_3b_step2_batch_v2/
Original model description:
---
library_name: transformers
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama_3b_step2_batch_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_3b_step2_batch_v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
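The effective batch size follows from the per-device batch size and gradient accumulation (a single training device is assumed here, since the card does not state the device count):

```python
train_batch_size = 2             # per-device micro-batch, from the card
gradient_accumulation_steps = 4  # from the card
n_devices = 1                    # assumption; not stated in the card

total_train_batch_size = train_batch_size * gradient_accumulation_steps * n_devices
print(total_train_batch_size)  # 8, matching the value listed above
```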
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.993 | 0.0341 | 50 | 1.1011 |
| 1.0449 | 0.0682 | 100 | 0.9752 |
| 0.9894 | 0.1023 | 150 | 0.8698 |
| 0.6199 | 0.1364 | 200 | 0.7913 |
| 0.5326 | 0.1704 | 250 | 0.7341 |
| 0.8109 | 0.2045 | 300 | 0.6799 |
| 0.7554 | 0.2386 | 350 | 0.6332 |
| 0.9877 | 0.2727 | 400 | 0.5993 |
| 0.3571 | 0.3068 | 450 | 0.5726 |
| 0.4539 | 0.3409 | 500 | 0.5439 |
| 0.464 | 0.3750 | 550 | 0.5147 |
| 0.4051 | 0.4091 | 600 | 0.4904 |
| 0.5371 | 0.4432 | 650 | 0.4732 |
| 0.4954 | 0.4772 | 700 | 0.4549 |
| 0.4594 | 0.5113 | 750 | 0.4399 |
| 0.4755 | 0.5454 | 800 | 0.4281 |
| 0.2948 | 0.5795 | 850 | 0.4118 |
| 0.3699 | 0.6136 | 900 | 0.4021 |
| 0.319 | 0.6477 | 950 | 0.3927 |
| 0.3359 | 0.6818 | 1000 | 0.3802 |
| 0.4056 | 0.7159 | 1050 | 0.3746 |
| 0.2975 | 0.7500 | 1100 | 0.3643 |
| 0.3868 | 0.7840 | 1150 | 0.3596 |
| 0.3485 | 0.8181 | 1200 | 0.3512 |
| 0.3546 | 0.8522 | 1250 | 0.3476 |
| 0.3697 | 0.8863 | 1300 | 0.3416 |
| 0.4056 | 0.9204 | 1350 | 0.3388 |
| 0.3189 | 0.9545 | 1400 | 0.3332 |
| 0.4173 | 0.9886 | 1450 | 0.3286 |
| 0.1779 | 1.0228 | 1500 | 0.3338 |
| 0.2877 | 1.0569 | 1550 | 0.3300 |
| 0.1506 | 1.0910 | 1600 | 0.3301 |
| 0.2075 | 1.1251 | 1650 | 0.3289 |
| 0.1956 | 1.1592 | 1700 | 0.3271 |
| 0.162 | 1.1933 | 1750 | 0.3276 |
| 0.2416 | 1.2274 | 1800 | 0.3228 |
| 0.2364 | 1.2615 | 1850 | 0.3243 |
| 0.1602 | 1.2956 | 1900 | 0.3219 |
| 0.1566 | 1.3296 | 1950 | 0.3211 |
| 0.1784 | 1.3637 | 2000 | 0.3215 |
| 0.1627 | 1.3978 | 2050 | 0.3190 |
| 0.1907 | 1.4319 | 2100 | 0.3183 |
| 0.1182 | 1.4660 | 2150 | 0.3183 |
| 0.1585 | 1.5001 | 2200 | 0.3179 |
| 0.2261 | 1.5342 | 2250 | 0.3158 |
| 0.1457 | 1.5683 | 2300 | 0.3150 |
| 0.2589 | 1.6024 | 2350 | 0.3146 |
| 0.2253 | 1.6364 | 2400 | 0.3144 |
| 0.1741 | 1.6705 | 2450 | 0.3143 |
| 0.1477 | 1.7046 | 2500 | 0.3141 |
| 0.1886 | 1.7387 | 2550 | 0.3141 |
| 0.2211 | 1.7728 | 2600 | 0.3139 |
| 0.238 | 1.8069 | 2650 | 0.3138 |
| 0.2863 | 1.8410 | 2700 | 0.3137 |
| 0.2603 | 1.8751 | 2750 | 0.3135 |
| 0.2484 | 1.9092 | 2800 | 0.3133 |
| 0.2343 | 1.9432 | 2850 | 0.3132 |
| 0.254 | 1.9773 | 2900 | 0.3132 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.1.0+cu118
- Datasets 3.0.2
- Tokenizers 0.20.1
|
jcruzantony/T5-Grammer-Correction | jcruzantony | 2025-03-14T19:55:33Z | 0 | 0 | null | [
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T19:54:11Z | ---
license: apache-2.0
---
|
mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF | mradermacher | 2025-03-14T19:55:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"zh",
"en",
"dataset:FuseAI/FuseChat-3.0-SFT-Data",
"dataset:FuseAI/FuseChat-3.0-DPO-Data",
"base_model:Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.3",
"base_model:quantized:Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T19:14:04Z | ---
base_model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.3
datasets:
- FuseAI/FuseChat-3.0-SFT-Data
- FuseAI/FuseChat-3.0-DPO-Data
language:
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.3
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.3-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.3.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
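The sizes in the table translate into a rough bits-per-weight figure, which is a handy way to compare quant types. A back-of-the-envelope sketch (treating the listed GB as 10^9 bytes, ignoring metadata overhead, and assuming ~14.8B parameters for a Qwen2.5-14B-class model):

```python
def bits_per_weight(size_gb: float, n_params_billion: float) -> float:
    """Approximate stored bits per weight for a quantized checkpoint."""
    return size_gb * 8 / n_params_billion

# From the table above, assuming ~14.8B parameters:
print(round(bits_per_weight(9.1, 14.8), 2))   # Q4_K_M -> ~4.92 bits/weight
print(round(bits_per_weight(15.8, 14.8), 2))  # Q8_0   -> ~8.54 bits/weight
```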
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF | mradermacher | 2025-03-14T19:55:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft",
"base_model:quantized:Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T18:54:26Z | ---
base_model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft-GGUF/resolve/main/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
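A quick way to choose from the table above is to compare file sizes against your available RAM/VRAM. The helper below is a rough sketch (sizes are copied from the table; the overhead allowance for context and runtime buffers is an assumption, not a llama.cpp figure):

```python
# Static quant sizes (GB) copied from the "Provided Quants" table above.
QUANT_SIZES_GB = {
    "Q2_K": 5.9, "Q3_K_S": 6.8, "Q3_K_M": 7.4, "Q3_K_L": 8.0,
    "IQ4_XS": 8.3, "Q4_K_S": 8.7, "Q4_K_M": 9.1,
    "Q5_K_S": 10.4, "Q5_K_M": 10.6, "Q6_K": 12.2, "Q8_0": 15.8,
}

def pick_quant(memory_gb, overhead_gb=1.5):
    """Return the largest quant whose file fits in memory_gb, keeping
    overhead_gb free for context/KV-cache buffers (a rough assumption)."""
    budget = memory_gb - overhead_gb
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= budget]
    return max(fitting)[1] if fitting else None

print(pick_quant(12))  # -> Q5_K_S
```

This only compares file size to memory; actual usage also depends on context length and backend, so treat it as a starting point, not a guarantee.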
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/H-D-T_-_Buzz-3b-small-v0.6.3-8bits | RichardErkhov | 2025-03-14T19:55:02Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:50:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Buzz-3b-small-v0.6.3 - bnb 8bits
- Model creator: https://huggingface.co/H-D-T/
- Original model: https://huggingface.co/H-D-T/Buzz-3b-small-v0.6.3/
Original model description:
---
base_model: H-D-T/Buzz-3b-small-v0.6.3
tags:
- axolotl
license: apache-2.0
datasets:
- H-D-T/Buzz
- H-D-T/Buzz-slice-1-10
- H-D-T/Buzz-slice-2-10
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# Buzz-3b-Small-v0.6.3
This model is an intermediate checkpoint of [H-D-T/Buzz-3b-small-v0.6.3](https://huggingface.co/H-D-T/Buzz-3b-small-v0.6.3), trained on
datasets:
- path: H-D-T/Buzz-slice-1-10
type: sharegpt
- path: H-D-T/Buzz-slice-2-10
type: sharegpt
chat_template: llama3
## Model description
Buzz small 0.6.3 is an intermediate checkpoint 2/10ths of the way through the Buzz dataset. It is trained using the Llama 3 chat template for only a single epoch over approximately 6.2 million examples.
## Intended uses & limitations
The model behaves in a standard 'chat' style, performing the normal tasks an assistant model would typically be expected to perform, often quite well.
It can write code, play characters, break down tasks, provide tutorials and step-by-step walkthroughs, analyze data, and perform mathematical calculations.
The model's outputs may still be inaccurate to some degree.
## Tutorial
[will update]
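Until then, here is a minimal, unofficial sketch of loading this 8-bit checkpoint with `transformers`. The repo id comes from this card; everything else is a generic bitsandbytes recipe, not instructions from the model author.

```python
# Unofficial sketch: load the bnb-8bit Buzz checkpoint with transformers.
# Requires a CUDA GPU plus the bitsandbytes and accelerate packages;
# nothing heavy runs at import time -- call load_model() yourself.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

REPO_ID = "RichardErkhov/H-D-T_-_Buzz-3b-small-v0.6.3-8bits"

def load_model(repo_id=REPO_ID):
    """Download the tokenizer and 8-bit model weights for the given repo."""
    quant_cfg = BitsAndBytesConfig(load_in_8bit=True)
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        quantization_config=quant_cfg,
        device_map="auto",
    )
    return tokenizer, model
```

Since the card says the model was trained with the Llama 3 chat template, prompts should generally be built with the tokenizer's `apply_chat_template` rather than raw strings.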
### Framework versions
- unsloth 2.4.0
- axolotl 4.0.0
- Transformers 4.40.2
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
|
remove-clothes/ai-clothes-remover | remove-clothes | 2025-03-14T19:54:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-14T19:53:45Z | # Best 8 AI Clothes Remover Of 2025
Are you tired of spending hours trying to edit photos to remove clothes manually? Look no further, as we have compiled a list of the 12 best AI clothes remover tools available, both free and paid.
With these tools, you can easily remove clothes from photos with just a few clicks, saving you time and effort. Say goodbye to frustrating and tedious photo editing and say hello to a quick and efficient way to remove clothes from photos.
## What is AI Clothes Remover?
Developed by the team at DeepNude, AI Clothes Remover is a software that uses artificial intelligence to remove clothing from images of people.
Yes, you read that right. With just a few clicks, this AI technology can remove clothing from any image, leaving the subject seemingly nude. Now, you might be wondering: why would anyone create such software?
The answer is simple - curiosity and creativity. The creators at DeepNude wanted to explore the capabilities of AI and push the boundaries of what is possible.
## What Are The Best AI Clothes Removers (Free & Paid)?
### 1. Undress Love
The "Undress Love" is an innovative AI Clothes Remover that revolutionizes the way we interact with clothing.
With its advanced technology, it allows you to effortlessly remove clothes from images, making it a game-changer in various industries such as fashion, e-commerce, and entertainment.
⏩⏩⏩[**Try Undress Love For Free**](https://bestaitools.top/fgRB)
#### Key Features:
AI-Powered Clothes Removal: The "Undress Love" utilizes advanced AI algorithms to accurately detect and remove clothes from images.
Easy-to-Use Interface: With its user-friendly interface, you can easily upload images and remove clothes with just a few clicks.
Realistic Results: The AI technology used in "Undress Love" ensures that the clothes removal process produces natural and realistic results.
Customization Options: You have the flexibility to adjust the level of clothes removal, allowing you to achieve the desired effect in your images.
Compatibility: The "Undress Love" supports various image formats, making it compatible with different platforms and software.
Fast Processing: The AI algorithms used in the "Undress Love" enable quick and efficient clothes removal, saving you valuable time.
#### What I Like About the "Undress Love":
One of the key aspects that you will appreciate about "Undress Love" is its simplicity. The user-friendly interface makes it easy for you to navigate and use the tool, even if you have limited technical knowledge.
Additionally, the realistic results it produces ensure that your edited images look natural and professional.
#### What I Dislike About the "Undress Love":
One drawback is that the accuracy of clothes removal may vary depending on the complexity of the image.
In some cases, the AI algorithms may struggle to accurately detect and remove clothes, resulting in less precise results.
#### Pricing:
The "Undress Love" offers a limited free plan with limited features. It offers 3 different pricing plans to cater to various user needs.
The pricing starts at $2.92 per month for basic features, with higher-tier plans ($15/month) available for more advanced functionalities.
The advanced features include age & body type trait access, faster generations, excellent quality, no queue, etc.
⏩⏩⏩[**Try Undress Love For Free**](https://bestaitools.top/fgRB)
### 2. Undress App
Undress App is a revolutionary application that brings the AI Clothes Remover feature right to your fingertips. With this cutting-edge technology, you can now effortlessly remove clothes from images and reveal what lies beneath.
Whether you're a fashion enthusiast, a photographer, or simply curious, the 'Undress App' is designed to provide you with a unique and exciting experience.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)

#### Key Features:
AI Clothes Remover: The 'Undress App' standout feature is its advanced AI technology that allows you to remove clothes from images with just a few taps.
Easy-to-Use Interface: The app boasts a user-friendly interface, making it simple for anyone to navigate and utilize its powerful features.
High-Quality Results: The AI Clothes Remover feature ensures that the images are processed with precision, delivering realistic and seamless results.
Privacy Protection: The 'Undress App' prioritizes your privacy by securely processing and storing your images locally on your device.
Image Enhancement Tools: In addition to clothes removal, the app offers a range of image enhancement tools to further enhance your photos.
Share and Save: Once you've removed clothes from an image, you can easily save and share the edited photo with friends or on social media platforms.
#### Pros:
Unleash Your Creativity: With the 'Undress App', you can explore new artistic possibilities and experiment with different looks.
Save Time and Effort: Instead of manually editing images, the AI Clothes Remover feature saves time and effort by automating the process.
Fun and Entertaining: Whether you're pranking friends or simply having fun, the 'Undress App' adds an element of excitement to your digital experiences.
#### Cons:
Dependent on Image Quality: The effectiveness of the AI Clothes Remover feature may vary depending on the quality of the image.
Limited to Digital Images: The 'Undress App' can only remove clothes from digital images and does not work with physical objects.
#### Pricing:
The 'Undress App' offers a free tier as well as 3 paid pricing options to suit your needs, letting you experience the AI Clothes Remover feature with additional benefits and enhanced functionality.
Free: It allows 10 generations and the quality of the image is medium.
Basic: It costs $11.99/month for 15 generations per month. It offers advanced features like faster generations, body type trait access, etc.
Standard: It costs $24.99 for 90 generations per month. It offers additional premium features like no queue, excellent quality images, etc.
Pro: It costs $47.99/month for 600 generations. It offers all premium features including no watermarks, age trait access, etc.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
### 3. Nudify VIP
The Nudify VIP is an AI clothes removal service that allows you to remove clothes from any photo. With this service, you can undress any person in a photo for free!
By using advanced artificial intelligence technology, the Nudify VIP can seamlessly remove clothing from images, providing a unique and entertaining experience.
#### Key Features
Easy to use: The Nudify VIP offers a user-friendly interface, making it simple for you to upload and edit photos.
Accurate results: The AI technology used by the Nudify VIP ensures precise and realistic clothing removal, giving you a seamless and natural-looking result.
Privacy protection: The Nudify VIP prioritizes your privacy and ensures that your photos are not stored or shared with any third parties.
Fast processing: The Nudify VIP provides quick processing times, allowing you to see the results of your edited photos in a matter of seconds.
Free to use: The Nudify VIP offers its clothes removal service for free, allowing you to explore and enjoy the features without any cost.
Compatibility: The Nudify VIP is compatible with various devices and operating systems, allowing you to access the service from your preferred platform.
#### Pros
Entertainment value: The Nudify VIP provides a fun way to edit photos, allowing you to create unique and amusing images.
Convenience: With the Nudify VIP, you can easily remove clothing from photos without the need for complex editing software or skills.
Cost-effective: The fact that the Nudify VIP is free to use makes it a cost-effective option for those looking to experiment with clothes removal in their photos.
#### Cons
Accuracy limitations: While the Nudify VIP strives for accuracy, there may be instances where the clothing removal may not be perfect, resulting in minor imperfections.
Ethical considerations: It is important to remember that using the Nudify VIP or any similar service should be done responsibly and with respect for others' privacy and consent.
#### Pricing
The Nudify VIP is available for free for limited features and generations. It offers a paid plan for advanced features like UHD quality images, no queue, faster generations, undress mode, etc.
The paid plan starts from $2.92/month and goes up to $15.83/month depending on features and credits.
### 4. Pornx AI
The Pornx AI is an advanced technology that utilizes artificial intelligence to provide a unique and innovative solution known as the AI Clothes Remover. This cutting-edge software is designed to digitally remove clothing from images and videos, creating a realistic and visually appealing result.
#### Key Features:
Accurate Clothing Removal: The AI Clothes Remover uses advanced algorithms to accurately detect and remove clothing from images and videos.
Realistic Results: The software ensures that the resulting images and videos appear natural and realistic, with no visible traces of the removed clothing.
Easy-to-Use Interface: The user-friendly interface allows you to easily upload and process your images and videos, making the clothing removal process quick and hassle-free.
Customization Options: The AI Clothes Remover offers various customization options, allowing you to adjust the level of clothing removal and enhance the final output according to your preferences.
Fast Processing Speed: With its powerful AI technology, the software delivers fast processing speeds, enabling you to obtain the desired results promptly.
Compatibility: The AI Clothes Remover is compatible with a wide range of devices and platforms, ensuring that you can use it conveniently on your preferred device.
#### Pros:
Enhanced Visual Appeal: The AI Clothes Remover can transform images and videos by removing clothing, resulting in visually striking and attention-grabbing content.
Time-Saving Solution: Instead of manually editing or retouching images, the AI Clothes Remover offers a quick and efficient way to achieve the desired effect.
Privacy Protection: The software allows you to create content without the need for models or subjects to physically remove their clothing, ensuring privacy and discretion.
#### Cons:
Potential Misuse: As with any technology, there is a risk of misuse or inappropriate use of the AI Clothes Remover, which may raise ethical concerns.
Dependence on Image Quality: The effectiveness of the clothing removal process may depend on the quality and clarity of the original image or video.
#### Pricing:
It offers a limited free plan and a paid plan. They offer flexible pricing options, including monthly and annual subscriptions, to cater to different user needs and budgets.
Free Plan: It offers basic features like medium-quality images, slower generations, etc.
Gold Plan: It costs 800 coins (approx $24.99 per month) and offers all premium features like an in-paint tool, faster generations, unlimited creations, custom face, custom pose, etc.
### 5. SoulGen
SoulGen AI Clothes Remover is an innovative AI tool that allows you to remove clothes from images using simple text prompts. With this powerful tool, you can effortlessly create unique and captivating images that will leave a lasting impression.
Whether you are a professional designer or simply someone who enjoys exploring their creative side, SoulGen AI Clothes Remover is the perfect tool for you.
#### Key Features
Easy-to-use interface: SoulGen AI Clothes Remover comes with a user-friendly interface that makes it simple for anyone to use, regardless of their technical expertise.
Text-based prompts: You can generate stunning images by providing text prompts to the AI, allowing you to create unique and personalized artwork.
High-quality results: The AI algorithms used by SoulGen ensure that the generated images are of the highest quality, with realistic details and accurate clothing removal.
Time-saving: With SoulGen AI Clothes Remover, you can create impressive images in a matter of seconds, saving you valuable time and effort.
Unlimited creativity: The tool allows you to unleash your creativity by providing options to add, extend, and remove content from your images, giving you complete control over the final result.
Commercial use: SoulGen AI Clothes Remover allows you to use the generated images for commercial purposes, as long as you create the art on your own.
#### Pros
Effortless clothing removal: SoulGen AI Clothes Remover simplifies the process of removing clothes from images, making it accessible to users of all skill levels.
Enhanced creativity: The tool empowers users to explore their imagination and create unique images that stand out from the crowd.
Time-efficient: With the ability to generate images in mere seconds, SoulGen AI Clothes Remover saves you valuable time and allows you to focus on other aspects of your creative projects.
#### Cons
Dependence on AI: As with any AI tool, the accuracy and quality of the generated images may vary depending on the input and the capabilities of the AI algorithms.
Limitations in customization: While SoulGen AI Clothes Remover offers a range of options for content manipulation, there may be limitations in terms of specific customization requests.
#### Pricing
Soulgen offers a limited free trial and paid plan. They offer flexible pricing options, including monthly and annual plans, to suit your needs and budget.
The paid plan costs $12.99/month or $90.99/month and offers advanced features like 100 generations/month, no watermarks, faster generations, no queue, etc.
### 6. Nudify Online
Nudify Online is an AI Clothes Remover tool that allows you to generate highly realistic AI undress photos with just a few clicks. By leveraging deepnude AI technology, Nudify Online offers the highest accuracy in creating deepfake nudes and bikini photos. With its user-friendly interface, you can easily unleash the power of AI to undress any girl online for free.
#### Key Features
Undress AI photo generator for free deepnudes
Highest accuracy of deepfake nudes and bikini photos
Easy-to-use interface for quick and seamless photo generation
Unleash the power of deepnude AI technology
Create highly realistic AI undress photos
Generate undress photos with just a few clicks
#### Pros
Free to use: Nudify Online offers its AI Clothes Remover tool for free, allowing you to generate undressing photos without any cost.
High accuracy: With the highest accuracy in deepfake nudes and bikini photos, Nudify Online ensures that the generated images are realistic and of high quality.
User-friendly interface: The easy-to-use interface of Nudify Online makes it simple for anyone to generate undressed photos with just a few clicks.
#### Cons
Legal and ethical concerns: Using Nudify Online or any AI Clothes Remover tool raises legal and ethical concerns, as it involves manipulating and generating explicit content without the consent of the individuals in the photos.
Potential misuse: There is a risk of misuse of the generated undressed photos, as they can be used for non-consensual purposes or to harm individuals.
#### Pricing
Nudify Online's AI Clothes Remover tool is available for free for limited generations. It offers the paid plans for advanced features like faster generations, no queue, no watermarks, etc.
The paid plan starts from $21.66/month and goes up to $54/month depending on features and credits.
### 7. Candy AI
Candy AI is an innovative platform that offers an AI Clothes Remover feature, allowing you to experience a unique and immersive digital experience. With cutting-edge machine learning and a smart system, Candy AI enables you to interact and get intimate with your AI girlfriend.
Whether you're looking for a slow and sensual build-up or want to dive right into spicy conversations, Candy AI has got you covered. You have complete control over designing your AI girlfriend, from choosing her body type, personality, and clothes to even enjoying a polyamorous setup with multiple characters.
This platform empowers you to bring your dream girl to life and explore various fantasies and situations.
#### Key Features:
Customizable AI Girlfriend: You have the freedom to customize every aspect of your AI girlfriend, from her physical appearance to her personality traits and quirks. This allows you to create a digital companion that perfectly matches your desires and preferences.
Realistic Interactions: Through Candy AI's advanced AI technology, your AI girlfriend will listen to you, respond quickly to your messages, and even follow photo requests. Every interaction feels genuine and realistic, providing you with an immersive experience.
Intimacy and Control: With Candy AI, you are always in control of your AI girlfriend's actions and behaviors. You can explore intimate moments and enjoy a sense of companionship, knowing that you are the one directing the experience.
Wide Range of Scenarios: Whether you're in the mood for a romantic build-up or want to explore more adventurous situations, Candy AI offers a variety of scenarios to choose from. You can experiment with different poses, clothes, and locations to create the perfect setting for your interactions.
NSFW AI Capabilities: Candy AI's NSFW AI generator and NSFW AI chatbot capabilities cater to your deepest desires. Whether you're seeking a naughty chat with a feisty partner or a slow and sensual relationship with a sweet girlfriend, these features can fulfill your fantasies.
Photo Requests: Candy AI allows you to request specific photos from your AI girlfriend, adding an extra layer of personalization to your experience. You can enjoy visual representations of your digital companion, making the interaction even more immersive.
### 8. DreamGF
DreamGF is an innovative platform that allows you to create your AI girlfriend with the help of artificial intelligence technology.
With DreamGF, you can design and customize your virtual companion according to your preferences, including their face, body, and clothing features. Whether you're looking for companionship, a virtual assistant, or a unique way to practice conversation skills, DreamGF's AI girlfriend can offer an intriguing solution.
#### Key Features:
Enhanced Realism: The AI Clothes Remover feature in DreamGF allows you to experience more realistic interaction with your AI girlfriend. They can dynamically remove and change clothing items, providing a visually immersive experience.
Customizable Clothing Options: You have the freedom to choose from a wide range of clothing options for your AI girlfriend. Mix and match different styles until you find the perfect combination that suits your preferences.
Seamless Integration: The AI Clothes Remover seamlessly integrates with the overall AI girlfriend experience in DreamGF. It enhances the level of interactivity and engagement, making your virtual companion feel more lifelike.
Privacy and Security: DreamGF prioritizes your privacy and security. The AI Clothes Remover feature ensures that your personal information and interactions with your AI girlfriend are kept confidential and secure.
Real-Time Updates: The AI Clothes Remover feature allows for real-time updates, ensuring that your AI girlfriend's clothing changes are reflected instantly. This adds to the immersive and interactive nature of the experience.
Continuous Development: DreamGF is committed to continuous development and improvement. The AI Clothes Remover feature will be regularly updated with new clothing options and enhancements to provide you with an evolving and exciting experience.
## How Does AI Clothes Remover Work?
Have you ever wondered how AI clothes remover works? It's a technology that seems almost magical - the ability to remove clothes from a photo or video with just the click of a button. But how does it work? Let's delve into the mystery and find out.
At its core, AI clothes remover utilizes artificial intelligence algorithms to analyze and manipulate images or videos. These algorithms are trained on a large dataset of images and videos, teaching them to recognize and differentiate between different types of clothing. This process is known as deep learning, where the AI system learns and improves as it processes more data.
#### The Role of Deep Neural Networks
To understand how AI clothes remover works, we must first understand the role of deep neural networks. These networks are modeled after the structure of the human brain, with layers of interconnected nodes that process and analyze data. The deeper the network, the more complex the data it can analyze.
In the case of AI clothes remover, the deep neural network is tasked with identifying and segmenting clothing from an image or video. This involves recognizing patterns, shapes, and colors that are characteristic of different garments. The network also takes into account factors such as lighting, shadows, and textures to accurately identify clothing items.
#### Removing Clothes with Generative Adversarial Networks (GANs)
Once the clothing items have been identified and segmented, the AI system uses another component called Generative Adversarial Networks (GANs) to remove the clothes. GANs are composed of two neural networks - a generator and a discriminator - that work in tandem to produce realistic images or videos. The generator creates new images, while the discriminator evaluates them and provides feedback for improvement.
In the case of AI clothes remover, the generator is trained to create images of bare skin that match the background of the original image or video. The discriminator then evaluates these images and provides feedback to the generator, allowing it to improve and create more realistic results.
## Applications of AI Clothes Remover
AI Clothes Remover is one of the latest developments in the world of fashion and technology. This innovative technology uses artificial intelligence to remove clothes from images of people, making it easier for users to see what lies beneath the fabric.
While some may view this as a controversial or even unnecessary feature, there are many benefits to using an AI clothes remover. Let's explore some of them below.
### 1. A More Realistic Shopping Experience
One of the main benefits of using an AI clothes remover is that it allows for a more realistic shopping experience. With traditional online shopping, customers can only see images of models wearing the clothes, which may not accurately represent how the clothes would look on their bodies. However, with an AI clothes remover, customers can see how the clothes would look on themselves or even their friends and family, giving them a better idea of how the clothes will fit and look in real life.
### 2. Saves Time and Effort
Trying on clothes can be a time-consuming and tiring process, especially when shopping for multiple items. With an AI clothes remover, customers can quickly and easily see how different clothes will look on them without having to physically try them on. This can save a lot of time and effort, making the shopping experience more efficient and enjoyable.
### 3. More Inclusive for All Body Types
Traditionally, fashion retailers tend to use models with a specific body type, which may not be representative of all customers. This can make it difficult for some customers to envision themselves wearing the clothes being advertised. With an AI clothes remover, customers of all body types can see how the clothes will look on them, promoting inclusivity and diversity in the fashion industry.
### 4. Reduce Returns and Waste
One of the biggest struggles in online shopping is not being able to try on clothes before purchasing them, leading to a high rate of returns. This not only causes inconvenience for customers but also contributes to waste in the fashion industry. With the help of an AI clothes remover, customers can have a better idea of how the clothes will fit and look on them, reducing the chances of returns and ultimately, minimizing waste.
## The Limitations of AI Clothes Remover
While AI clothes remover may seem like a revolutionary technology, it does have its limitations. The AI system can only remove clothes from images or videos that it has been trained on. This means that if the clothing item is not in its database, it will not be able to accurately remove it.
Additionally, AI clothes remover is still in its early stages and may not always produce perfect results. Factors such as complex clothing patterns, overlapping garments, and unusual lighting can affect the accuracy of the system. But with continued advancements in AI technology, we can expect to see improvements and more realistic results in the future.
So, the next time you use an AI clothes remover, remember the complex processes and algorithms that are working behind the scenes to make it possible. While it may seem like magic, it's a result of advanced technology and years of training and development. Who knows what other amazing things AI will be able to do in the future?
## Ethical and Legal Concerns Regarding AI Clothes Remover
The rise of Artificial Intelligence (AI) has brought about many technological advancements and innovations. One of the latest developments is the AI clothes remover, a software that uses AI algorithms to remove clothing from images with a single click.
While this technology may seem revolutionary and convenient, it also comes with its own set of ethical and legal concerns.
### Confidentiality and Consent
One of the main concerns surrounding AI clothes remover is the issue of confidentiality and consent. In today's digital age, it has become easier to manipulate images and videos, leading to a rise in non-consensual image sharing and revenge porn.
With the use of AI clothes remover, it becomes even easier for someone to remove a person's clothing from an image without their consent, violating their privacy and potentially causing harm.
The software also raises concerns about confidentiality. Many individuals may not be aware that their images have been edited using AI clothes remover, and this could have serious consequences if the images are shared without their knowledge or consent.
### Objectification and Sexualization
The use of AI clothes remover also raises questions about objectification and sexualization. By removing someone's clothing from an image, it reduces them to a mere object rather than a person with agency and autonomy. This can further perpetuate harmful stereotypes and contribute to a culture of objectification and sexualization.
### Legal Implications
As with any new technology, there are also legal implications that need to be considered. For instance, the use of AI clothes remover could potentially violate copyright laws if the images being edited are owned by someone else. Furthermore, using the software to edit and share images without the person's consent could also lead to legal consequences.
## The Need for Responsible Use
While AI clothes remover may have its advantages, individuals and companies need to use this technology responsibly. This includes obtaining consent before editing and sharing images, and being mindful of the potential harm it can cause. As with any technology, it is crucial to consider the ethical implications and use it in a way that respects the rights and dignity of all individuals involved.
|
2084Collective/super-saiyan-checkpoint-100 | 2084Collective | 2025-03-14T19:53:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T19:53:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
remove-clothes/best-ai-clothes-remover | remove-clothes | 2025-03-14T19:53:13Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T19:52:38Z | ---
license: apache-2.0
---
# Best AI Clothes Remover
I have written about the 3 best AI clothes removers I used over the past 6 months. These tools stand out in terms of features, customization, quality, and speed.
.
.
## 1. Undress.app
.
.
Undress.app is rapidly gaining traction as one of the leading AI tools for image editing, specifically focused on clothing removal. This innovative platform utilizes advanced AI technology to create realistic depictions of undressed individuals, making it a compelling choice for users interested in creative photo manipulation.
.
.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
.
.
**Key Features**
- **AI-Powered Editing:** Utilizes cutting-edge AI models trained on thousands of images to ensure highly accurate and lifelike results.
- **User-Friendly Interface:** Designed for simplicity, allowing even those without technical skills to navigate seamlessly.
- **Customizable Generation Modes:** Offers various modes, including Lingerie and NSFW, to cater to different user preferences.
- **High Accuracy:** Generates realistic results that accurately reflect diverse body types.
- **Safe and Anonymous Registration:** Users can sign up without revealing personal information, enhancing privacy.
- **Fast Processing:** Quickly delivers results, enabling users to enjoy their edited images without long waiting periods.
- **Mobile Compatibility:** Available on both iOS and Android platforms, making it accessible for users on the go.
- **Free Trial Options:** New users can test certain features for free, allowing them to experience the app's capabilities before committing.
.
.
**Ease of Use**
One of the standout aspects of Undress.app is its exceptional ease of use. The intuitive interface allows users to upload images effortlessly, select their desired editing modes, and receive results in mere seconds. This simplicity attracts both professional editors looking for advanced tools and casual users who appreciate straightforward functionalities.
.
.
**Pros**
- **Intuitive Design:** Users can quickly learn to use the app, minimizing the time needed to familiarize themselves with the platform.
- **High-Quality Edits:** The app's advanced AI ensures that edits are not only realistic but also tailored to the specific input provided.
- **Discretion and Privacy:** The app does not save user data, ensuring confidentiality and a safe user experience.
.
.
**Cons**
- **Ethical Concerns:** The ability to create undressed images raises important ethical discussions around consent and privacy, which users must navigate responsibly.
- **Subscription Costs:** While there are free options, accessing the full suite of features requires a paid subscription, which may deter some users.
- **Dependent on Input Quality:** The final output heavily relies on the quality of the uploaded images, which means subpar uploads can lead to unsatisfactory results.
.
.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
.
.
## 2. Undress.cc
Undress.cc is a revolutionary online platform that specializes in AI-driven clothing removal technology, offering users a unique, user-friendly way to explore and edit digital images. The website has gained significant attention for its sophisticated ability to realistically undress images while emphasizing user privacy and security. This tool caters to various users, from artists seeking creative expression to individuals curious about digital manipulation.
.
.
⏩⏩⏩[**Try Undress CC For Free**](https://bestaitools.top/fgRB)

**Key Features**
- **User-Friendly Interface:** The platform is designed to be intuitive, allowing users to navigate easily and access features without technical difficulty.
- **Multiple Modes:** Users can choose between different undressing modes, including 'X-Ray Mode' for full nudity or 'Lingerie Mode' to explore artistic fantasies.
- **Customizable Outputs:** The tool offers extensive customization options, allowing users to manipulate parameters such as body shape, age, and skin tone for personalized results.
- **High-Quality Results:** AI algorithms produce realistic images that appear natural and lifelike, thanks to training on vast datasets of images.
- **Privacy and Security:** The platform prioritizes user safety, employing strong encryption to protect personal data and images.
- **Fast Processing Times:** Users can quickly upload images and receive edited versions almost instantly, making it efficient for quick edits.
- **Free and Paid Options:** Undress.cc offers both free and premium plans, allowing users to experience basic functionalities before considering upgrades.
- **Safe Usage:** The site operates legally and ethically, with no laws against personal use of the service.
.
.
**Ease of Use**
Getting started with Undress.cc requires minimal effort. Users simply need to register on the platform to receive free credits, allowing them to test the service before committing financially. The uploading process is straightforward, and the various modes can be accessed with just a few clicks, ensuring that even those unfamiliar with image editing tools can successfully use the platform.
.
.
**Pros**
- Innovative technology that offers realistic results.
- Wide array of customization options for tailored outcomes.
- Fast processing with an easy-to-navigate interface.
- Security measures ensure the safety of personal images.
.
.
**Cons**
- Potential ethical concerns surrounding the use of nudity in digital manipulation.
- The requirement for user registration may deter some users.
- Limited free credits may restrict extensive experimentation.
.
.
⏩⏩⏩[**Try Undress CC For Free**](https://bestaitools.top/fgRB)
.
.
## Frequently Asked Questions (FAQs)
**1. What is an AI Clothes Remover?**
An AI Clothes Remover is a digital tool or software that utilizes artificial intelligence algorithms to detect and remove clothing from images or videos. This technology is often used in fashion, gaming, and entertainment to create visual effects or modify images for various applications.
**2. How does AI Clothes Remover Work?**
AI Clothes Remover utilizes advanced algorithms and machine learning techniques to identify and segment clothing items in images or videos. By analyzing visual data, it distinguishes between clothing and skin, allowing for the removal or replacement of clothing items digitally while maintaining the integrity of the underlying image or video content.
**3. What are the common applications of AI Clothes Remover?**
Here are some common uses of AI Clothes Remover:
1. **Fashion Design**
   - Prototyping: Designers can visualize how garments fit and drape on different body types without creating physical prototypes.
   - Concept Development: Facilitates the exploration of new clothing concepts on models without the need for actual garments.
2. **Photo Editing**
   - Image Enhancement: Allows for artistic modifications to images for creative projects, ensuring a professional finish.
   - Restoration: Helps in restoring and preserving old photographs by digitally altering clothing to reveal underlying details.
3. **Virtual Fitting Rooms**
   - Online Shopping: Retailers can enhance customer experience by providing virtual try-on solutions, enabling users to visualize outfits on themselves without needing physical clothes.
4. **Educational Purposes**
   - Medical Training: Used for studying human anatomy by removing clothing in medical illustrations and images.
5. **Artistic Expression**
   - Creative Projects: Artists can use AI tools to explore ideas that require the alteration of clothing in images to fit specific themes or concepts.
RichardErkhov/jtatman_-_Tiny-Llama-Llama-Dolphin-laser-1b-merge-8bits | RichardErkhov | 2025-03-14T19:51:59Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:50:49Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Tiny-Llama-Llama-Dolphin-laser-1b-merge - bnb 8bits
- Model creator: https://huggingface.co/jtatman/
- Original model: https://huggingface.co/jtatman/Tiny-Llama-Llama-Dolphin-laser-1b-merge/
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser
- cognitivecomputations/TinyDolphin-2.8.1-1.1b
- TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser
- cognitivecomputations/TinyDolphin-2.8.1-1.1b
- TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
---
# Tiny-Llama-Llama-Dolphin-laser-1b-merge
Tiny-Llama-Llama-Dolphin-laser-1b-merge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
* [cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser)
* [cognitivecomputations/TinyDolphin-2.8.1-1.1b](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.1-1.1b)
* [TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T)
## 🧩 Configuration
```yaml
models:
- model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
parameters:
weight: 1.0
- model: cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser
parameters:
weight: 1.0
- model: cognitivecomputations/TinyDolphin-2.8.1-1.1b
parameters:
weight: 0.4
- model: TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
parameters:
weight: 0.6
merge_method: linear
dtype: float16
```
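For intuition, the `linear` merge method in the config above amounts to a weighted average of corresponding parameters across the models, with the weights normalized to sum to one. A minimal sketch of that idea (plain Python lists stand in for real tensors, and `linear_merge` is an illustrative helper, not part of mergekit):

```python
# Minimal sketch of a "linear" merge: a normalized weighted average of
# corresponding parameters. Plain Python lists stand in for real tensors;
# mergekit itself operates on full model checkpoints.

def linear_merge(state_dicts, weights):
    """Average matching parameters across models, normalizing the weights."""
    total = sum(weights)
    norm = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for w, sd in zip(norm, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

model_a = {"layer.weight": [1.0, 2.0]}
model_b = {"layer.weight": [3.0, 4.0]}
merged = linear_merge([model_a, model_b], weights=[1.0, 1.0])
print(merged["layer.weight"])  # → [2.0, 3.0]
```

The real merge additionally casts the result to the configured `dtype` (here `float16`).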
## 💻 Usage
```python
# Install dependencies first (in a notebook: `!pip install -qU transformers accelerate`)
from transformers import AutoTokenizer
import transformers
import torch

model = "jtatman/Tiny-Llama-Llama-Dolphin-laser-1b-merge"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a half-precision text-generation pipeline, placing layers automatically
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Moryjj/epoch18_mlongt5_lamb | Moryjj | 2025-03-14T19:50:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-03-14T19:48:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/abdulmannan-01_-_Llama-3.2-3b-finetuned-for-json-generation-new-awq | RichardErkhov | 2025-03-14T19:50:37Z | 0 | 0 | null | [
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"awq",
"region:us"
] | null | 2025-03-14T19:49:10Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3b-finetuned-for-json-generation-new - AWQ
- Model creator: https://huggingface.co/abdulmannan-01/
- Original model: https://huggingface.co/abdulmannan-01/Llama-3.2-3b-finetuned-for-json-generation-new/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ai-nudifier-app/best-ai-nudifier | ai-nudifier-app | 2025-03-14T19:50:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T19:50:00Z | ---
license: apache-2.0
---
# Best AI Nudify Apps
The two AI nudify apps I like most are described below.
.
.
## 1. Undress.app
Undress.app has emerged as one of the most talked-about platforms in the realm of AI nudify applications, combining advanced technology with user-friendly features.
This app allows users to generate nudified images in a matter of seconds by employing sophisticated AI algorithms. As the demand for such tools grows, so does the need for ethical considerations and safety measures surrounding their use.
.
.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
.
.
**Key Features of Undress.app**
- **Advanced AI Technology:** Utilizes cutting-edge algorithms to produce high-quality nudified images quickly.
- **User-Friendly Interface:** Designed for users of all technical backgrounds, making the app accessible and easy to navigate.
- **Quick Processing:** Processes uploaded images swiftly, delivering results almost instantly.
- **Multiple Modes:** Offers various modes, such as lingerie, bikini, and NSFW options, allowing for creative flexibility.
- **High-Quality Results:** Works equally well for male and female images, ensuring realistic and clear output.
- **Privacy Focused:** Emphasizes user privacy, ensuring that images are not saved or shared, providing a confidential experience.
- **Free and Paid Options:** Users can start for free, with options to purchase additional credits for more advanced features.
- **Referral Program:** Allows users to earn free credits by inviting friends to use the app, enhancing user engagement.
.
.
**Ease of Use**
Undress.app's interface is particularly notable for its simplicity. Users can easily sign up and start creating nudified images with just a few clicks. The app guides users through the process of choosing a mode, uploading an image, and generating a result without overwhelming them with technical jargon or complex options.
.
.
**Pros:**
- Highly innovative technology providing fast results.
- User-friendly for both new and experienced users.
- Strong focus on privacy and data protection.
- Accessible pricing options for wider audience engagement.
.
.
**Cons:**
- Potential ethical concerns surrounding the use of nudify technologies.
- Some functionalities may require payment, limiting free users.
- Users must ensure compliance with local laws when utilizing AI nudify features.
.
.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
## 2. Undress.cc
Undress.cc is a dedicated platform that offers such services, providing an interesting space for individuals exploring the creative side of nudity. This article will delve into the best AI nudify apps available through Undress.cc, highlighting their features, ease of use, and considerations.
.
.
⏩⏩⏩[**Try Undress CC For Free**](https://bestaitools.top/fgRB)

.
.
**Key Features**
- **User-Friendly Interface:** The platform is designed with simplicity in mind, allowing users of all skill levels to navigate easily.
- **Multiple Modes:** Offers various modes such as 'X-Ray Mode' for realistic effects and 'Lingerie Mode' for more playful interpretations.
- **Privacy Focused:** Ensures user confidentiality and security while utilizing the application.
- **Free Trials:** New users can access free credits to test features before making a purchase.
- **High-Quality Results:** Utilizes advanced AI algorithms for realistic and visually appealing outcomes.
- **Referral Program:** Users can earn credits and rewards by inviting friends to join the platform.
- **Seamless Upload Process:** Allows quick uploading of images to facilitate a smooth user experience.
- **Responsive Support:** Offers customer service assistance for resolving user queries and technical issues.
.
.
**Ease of Use**
Undress.cc is designed to be intuitive, making the experience enjoyable for all users. The platform’s layout is straightforward, guiding users through the steps of uploading images and selecting desired effects. With its user-friendly design, users can quickly learn to utilize the app without needing extensive technical knowledge.
.
.
**Pros**
Easy to use with a clear interface.
Offers a unique creative outlet for artistic expression.
Variety of modes caters to different user preferences.
Free credits allow for initial exploration without financial commitment.
.
.
**Cons:**
Potential ethical concerns regarding the use of nudify apps.
Free version may have limitations on quality and features.
Users must understand the legal implications of generating such images.
Availability and performance can vary depending on internet connectivity.
.
.
⏩⏩⏩[**Try Undress CC For Free**](https://bestaitools.top/fgRB)
.
.
## Frequently Asked Questions (FAQS)
**1. What are AI Nudify Apps?**
AI Nudify Apps are applications that utilize artificial intelligence technology to generate or manipulate images, often focusing on altering images of people to present them in a nude or semi-nude form. These apps typically use algorithms that analyze existing photos and apply filters or modifications to produce nudity effects while maintaining a level of realism.
**2. How Do AI Nudify Works?**
AI Nudify Apps work through a series of advanced image processing techniques that involve artificial intelligence and machine learning. Here’s a breakdown of how they operate:
Data Collection: These apps rely on large datasets of images representing various body types and poses. The AI is trained on these datasets to recognize and understand human anatomy.
Image Analysis: When a user uploads a photo, the app analyzes the image using algorithms to identify key features such as body outlines, faces, and other relevant markers.
Algorithm Application: The app applies predefined algorithms that manipulate the original image to create a nude or semi-nude version. This might involve “removing” clothing digitally while ensuring the body’s proportions and features are maintained.
Deep Learning Techniques: Some sophisticated nudify apps utilize generative adversarial networks (GANs), which consist of two neural networks that work together. One network generates new images, while the other critiques them, ensuring the output looks realistic.
Post-Processing: Finally, the images might undergo additional processing to enhance the quality, fix any anomalies, and improve overall appearance.
**3. What are the applications of AI Nudify apps?**
AI Nudify Apps have a range of applications, though many of them come with ethical and legal concerns. Here are some of the notable applications:
Art and Photography: Some artists and photographers might use nudify apps to create artistic representations or explore themes of body image. These applications can serve as tools for creative expression.
Adult Entertainment: In the adult entertainment industry, nudify apps may be employed to produce content that meets certain aesthetic standards or facilitates the creation of adult-themed illustrations.
Fashion and Advertising: Fashion designers or advertisers might utilize these apps for conceptual designs, enabling their teams to visualize outfits in a naked form for creative brainstorming (though this can also lead to ethical debates).
Education: In some educational contexts, these apps can be utilized to demonstrate human anatomy more effectively, although ethical usage guidelines must be followed.
Social Media and Memes: Users may create humorous or meme-based content by altering images using nudify apps, often leading to trending social media posts or viral content.
**4. What are the privacy and ethical concerns regarding AI nudify apps?**
The use of AI Nudify Apps raises several significant privacy and ethical concerns. Here are some key points to consider:
Consent Issues: Many concerns revolve around whether individuals pictured in the original photos have given consent for their images to be manipulated. This lack of consent can violate personal privacy.
Potential for Misuse: These apps can be exploited for malicious purposes, such as creating non-consensual explicit images of individuals (often referred to as "revenge porn"). This misuse can lead to significant emotional and psychological harm.
Objectification: The availability of nudify apps can contribute to the objectification of individuals, particularly women, by reducing them to mere visual representations devoid of their humanity.
Privacy Violations: These applications may lead to unintentional leaks of private images, as users may upload sensitive content without fully understanding the app's security measures or data handling policies.
Legal Ramifications: The creation and sharing of manipulated images without consent may lead to legal consequences, including lawsuits or criminal charges, depending on local laws and regulations.
Impact on Mental Health: The proliferation of altered images can contribute to negative body image and self-esteem issues, as individuals compare themselves to unrealistic standards set by such manipulated content. |
RichardErkhov/disi-unibo-nlp_-_phi3-SFT-medqa-triples-cot-8bits | RichardErkhov | 2025-03-14T19:49:11Z | 0 | 0 | null | [
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:46:49Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi3-SFT-medqa-triples-cot - bnb 8bits
- Model creator: https://huggingface.co/disi-unibo-nlp/
- Original model: https://huggingface.co/disi-unibo-nlp/phi3-SFT-medqa-triples-cot/
Original model description:
---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** disi-unibo-nlp
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
porns-generator/ai-porns-generator | porns-generator | 2025-03-14T19:46:32Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-03-14T19:45:36Z | ---
license: mit
---
# 7 Best AI Porn Generators Of 2025
The world of adult content has been revolutionized by artificial intelligence, with AI porn generators pushing the boundaries of realism and creativity. As we step into 2025, these tools have become more advanced, accessible, and controversial than ever. Whether you're curious about the technology or exploring its possibilities, we’ve rounded up the 7 best AI porn generators of 2025—showcasing the cutting-edge tools shaping this evolving industry.
## 1. Seduced.ai
### Why I Recommend Seduced.ai
Seduced.ai stands out as the best AI porn generator available today. It offers a unique blend of user-friendliness and extensive customization options, making it accessible for everyone, regardless of technical expertise. The platform allows users to explore their fantasies and create personalized content effortlessly.
⏩⏩⏩[**Try Seduced.ai For Free**](https://sussap.net/h88f)

### Key Features
Extensive Fetish Support: Seduced.ai covers a wide range of fetishes, allowing users to generate content that caters to their specific desires.
Video Generation: Users can create short porn videos of up to 6 seconds, combining multiple sequences for a seamless experience.
Character Reusability: The platform allows users to save and reuse previously generated characters, enhancing creativity and continuity in content creation.
High-Quality Output: Seduced.ai provides options for upscaling images, ensuring that the generated content is not only unique but also visually appealing.
### My Experience
Using Seduced.ai has been a delightful experience. The interface is intuitive, making it easy to navigate through various options. I was able to generate high-quality images and videos quickly, which exceeded my expectations. The customization options allowed me to explore different scenarios and characters effortlessly.
### Pros
Easy to use, with no technical skills required.
Offers a vast array of extensions for unique content creation.
### Cons
Some features may require a subscription for full access.
⏩⏩⏩[**Try Seduced.ai For Free**](https://sussap.net/h88f)
## 2. Pornx.ai
Pornx.ai is a revolutionary platform that allows users to create stunning AI-generated adult content tailored to their fantasies. With its user-friendly interface and advanced features, it stands out as the best AI porn generator available today. I highly recommend it for anyone looking to explore their creativity in a safe and imaginative environment.
⏩⏩⏩[**Try Pornx.ai For Free**](https://sussap.net/9gfc)
### Why I Recommend It
Pornx.ai offers an unparalleled experience for users who wish to bring their fantasies to life. The platform's innovative tools and features make it easy to customize and generate unique content, ensuring that every user can create something truly special.
### Key Features
AI Image Generator: Create personalized images by selecting models, body types, and backgrounds.
Quality Mode: Enhance your images with options for Base, High, and Ultra quality settings.
Custom Pose: Transfer character poses from your images to generated content effortlessly.
In Paint: Modify specific areas of your images to achieve the desired look.
### My Experience
Using Pornx.ai has been an exciting journey. The intuitive design made it easy to navigate, and the results were impressive. I was able to create visuals that perfectly matched my imagination, making the experience both enjoyable and fulfilling.
### Pros
Extensive customization options allow for limitless creativity.
High-quality output enhances the overall visual experience.
### Cons
Some features may require a paid subscription for full access.
⏩⏩⏩[**Try Pornx.ai For Free**](https://sussap.net/9gfc)
## 3. Porngen.art
PornGen.art is a revolutionary platform that utilizes advanced artificial intelligence to create highly realistic and customizable pornographic images. This AI porn generator allows users to bring their fantasies to life, whether it's a dream character or a specific scenario. With its user-friendly interface and powerful algorithms, PornGen.art stands out as one of the best options available in the market.
### Why I Recommend It
PornGen.art is not just about generating images; it’s about creating personalized experiences. The platform prioritizes user privacy and offers a variety of customization options, making it a top choice for those looking to explore their fantasies safely and creatively.
### Key Features
Realistic Image Generation: Utilizes deep learning algorithms to create lifelike images.
Customizable Options: Users can adjust body type, hair, ethnicity, and more to fit their desires.
Privacy Protection: All uploaded images are confidential and deleted within 48 hours.
Multiple Styles: Explore various genres, including hentai, anime, and furry art.
### My Experience
Using PornGen.art has been an exciting journey. The ease of uploading images and the speed of generation amazed me. The results were impressive, and I appreciated the level of customization available.
### Pros
High-quality, realistic images that cater to diverse preferences.
Strong emphasis on user privacy and data security.
### Cons
Results can vary significantly based on the quality of the uploaded images.
## 4. Pornjourney.ai
PornJourney.ai stands out as the best AI porn generator available today, offering users an unparalleled experience in creating customized adult content. I recommend it for its advanced technology, user-friendly interface, and commitment to privacy and security. The platform allows users to generate images that cater to their specific preferences, making it a favorite among enthusiasts.
### Key Features
Fast Generation: Dedicated server clusters ensure quick image creation for premium users.
'Keep This Girl' Feature: Retain and modify the features of your favorite AI-generated characters.
Image Library: Save images and their metadata for easy access and modifications.
Privacy Protection: All images are encrypted, ensuring user data remains secure and private.
### My Experience
Using PornJourney.ai has been a delightful experience. The image generation process is seamless, and the results are incredibly realistic. I appreciate the variety of customization options available, allowing me to create characters that truly match my preferences.
### Pros
Exceptional realism and detail in generated images.
Regular updates with new features and content every weekend.
### Cons
AI porn videos are still in beta, which may lead to occasional instability.
## 5. Pornjoy.ai
PornJoy.ai stands out as the premier AI porn generator, offering users an innovative platform to create and customize adult content effortlessly. I recommend it for its user-friendly interface and extensive customization options that cater to a wide range of fantasies.
### Why I Recommend It
PornJoy.ai provides a unique blend of creativity and privacy, allowing users to explore their desires in a safe environment. The platform's advanced AI technology ensures high-quality images that truly reflect individual preferences.
### Key Features
AI Porn Generator: Create personalized porn images by selecting body types, skin tones, hairstyles, and outfits.
AI Porn Chat: Engage in steamy conversations with customizable AI characters, enhancing the interactive experience.
AI Hentai Generator: Quickly generate unique hentai images tailored to your specific desires.
Undress AI Generator: Transform dressed images into AI nudes, allowing for creative modifications and adjustments.
### My Experience
Using PornJoy.ai has been a delightful experience. The intuitive design made it easy to navigate, and the variety of customization options allowed me to create images that perfectly matched my fantasies.
### Pros
High-quality, realistic AI-generated images.
Strong emphasis on user privacy and data protection.
### Cons
Some features may require a learning curve for new users.
## 6. Pornpen.ai
### Why I Recommend It
I recommend Pornpen.ai for its ability to generate high-quality, personalized adult content that caters to diverse tastes. The user-friendly interface and impressive customization options make it accessible for everyone, regardless of their experience level.
### Key Features
Customizable Content: Users can specify their preferences, ensuring the generated content aligns with their desires.
High-Quality Graphics: The platform produces visually appealing images and videos that enhance the overall experience.
Privacy Protection: Pornpen.ai prioritizes user privacy, ensuring that all interactions remain confidential.
Regular Updates: The platform frequently updates its algorithms to improve content quality and user experience.
### My Experience
My experience with Pornpen.ai has been overwhelmingly positive. The platform is easy to navigate, and I was impressed by the quality of the generated content. The customization options allowed me to explore various themes, making it a fun and engaging experience.
### Pros
Innovative Technology: The AI behind Pornpen.ai is cutting-edge, producing unique content that is hard to find elsewhere.
User-Friendly Interface: The platform is designed for ease of use, making it accessible for all users.
### Cons
One downside is that the generated content may not always meet expectations, as it relies on algorithms that can sometimes produce unexpected results.
## 7. Candy.ai
### Why I Recommend It
Candy.ai is highly recommended for its ability to blend intimacy, creativity, and personalization. Users can explore various fantasies and customize their AI girlfriend to meet their desires, ensuring a fulfilling experience.
### Key Features
Customizable AI Girlfriend: Users can design their girlfriend's body type, personality, and clothing, creating a truly unique companion.
Interactive Experience: The AI girlfriend listens, responds quickly, and can even follow photo requests, making interactions feel genuine.
Privacy and Security: Candy.ai prioritizes user privacy with state-of-the-art secure data storage, ensuring all interactions remain confidential.
Endless Possibilities: Users can explore various scenarios, from romantic chats to intense AI sexting, catering to all preferences.
### My Experience
Using Candy.ai has been an enjoyable journey. The customization options allowed me to create a girlfriend that truly resonates with my desires. The interactions felt real, and I appreciated the privacy measures in place.
### Pros
Highly customizable experience tailored to individual preferences.
Strong emphasis on user privacy and data security.
### Cons
Some users may find the AI's responses occasionally lack depth.
## Frequently Asked Questions (FAQS)
### 1. What is AI porn?
AI porn refers to adult content created or enhanced using artificial intelligence technologies. This can include generating realistic images, videos, or deepfakes of individuals, often without their consent. AI porn leverages machine learning algorithms to manipulate or create explicit content that can appear highly authentic.
### 2. How does AI porn work?
AI porn typically relies on deep learning techniques, such as Generative Adversarial Networks (GANs) or diffusion models. These algorithms are trained on large datasets of images and videos to learn patterns and generate new content. For example:
Deepfakes: AI swaps faces in existing videos to make it appear as though someone is performing in a pornographic video.
Image generation: AI creates entirely synthetic images or videos of people who may not exist.
Enhancement: AI improves the quality of existing content, making it more realistic.
### 3. Can AI porn generators create realistic content?
Yes, AI porn generators can create highly realistic content. Advances in AI technology, particularly with GANs and diffusion models, have made it possible to produce images and videos that are nearly indistinguishable from real footage. However, the quality depends on the sophistication of the AI model and the data it was trained on.
### 4. Are there ethical and privacy concerns regarding AI porn?
Yes, AI porn raises significant ethical and privacy concerns:
Non-consensual content: Many AI porn creations involve using someone's likeness without their permission, which is a violation of privacy and consent.
Misuse and exploitation: AI porn can be used for harassment, revenge porn, or blackmail, causing emotional and psychological harm to victims.
Legal gray areas: Laws around AI-generated explicit content are still evolving, making it difficult to regulate or hold perpetrators accountable.
Impact on society: The proliferation of AI porn could normalize non-consensual content and contribute to the objectification of individuals.
|
RichardErkhov/Fischerboot_-_banana-brot-8bits | RichardErkhov | 2025-03-14T19:44:50Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:40:11Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
banana-brot - bnb 8bits
- Model creator: https://huggingface.co/Fischerboot/
- Original model: https://huggingface.co/Fischerboot/banana-brot/
Original model description:
---
base_model:
- Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 15]
model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP
- sources:
- layer_range: [31, 32]
model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
```
|
RichardErkhov/KnutJaegersberg_-_Galactica-6.7B-EssayWriter-8bits | RichardErkhov | 2025-03-14T19:44:04Z | 0 | 0 | null | [
"safetensors",
"opt",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:39:30Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Galactica-6.7B-EssayWriter - bnb 8bits
- Model creator: https://huggingface.co/KnutJaegersberg/
- Original model: https://huggingface.co/KnutJaegersberg/Galactica-6.7B-EssayWriter/
Original model description:
---
license: cc-by-nc-4.0
datasets:
- ChristophSchuhmann/essays-with-instructions
model-index:
- name: Galactica-6.7B-EssayWriter
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 40.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Galactica-6.7B-EssayWriter
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 50.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Galactica-6.7B-EssayWriter
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 33.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Galactica-6.7B-EssayWriter
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 40.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Galactica-6.7B-EssayWriter
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Galactica-6.7B-EssayWriter
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 3.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Galactica-6.7B-EssayWriter
name: Open LLM Leaderboard
---

Prompt Example:
```
### Instruction:
Provide the inputted essay that when summarized resulted in the following summary : The organization's learning and development practices are poor as they involve manual orientation from experienced workers. New employees are not advised of their roles in the organization before assignment of tasks. The orientation process takes long due to lack of a well-coordinated orientation programs.
### Response:
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Galactica-6.7B-EssayWriter)
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.75|
|AI2 Reasoning Challenge (25-Shot)|40.10|
|HellaSwag (10-Shot) |50.29|
|MMLU (5-Shot) |33.88|
|TruthfulQA (0-shot) |40.27|
|Winogrande (5-shot) |58.48|
|GSM8k (5-shot) | 3.49|
|
fullertoncobie/qwen-rap-finetuned-1.4 | fullertoncobie | 2025-03-14T19:43:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-14T18:18:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Zagusan_-_Wikibot-3001-8bits | RichardErkhov | 2025-03-14T19:43:20Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:43:08Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Wikibot-3001 - bnb 8bits
- Model creator: https://huggingface.co/Zagusan/
- Original model: https://huggingface.co/Zagusan/Wikibot-3001/
Original model description:
---
license: mit
base_model: Zagusan/Wikibot-3001
tags:
- generated_from_trainer
model-index:
- name: Wikibot-3001
results: []
datasets:
- mapama247/wikihow_es
language:
- es
pipeline_tag: text-generation
widget:
- text: "Hola. ¿Cómo estás?"
- text: "¿Cómo cocinar?"
- text: "¿Cómo hornear un pastel?"
- text: "¿Cómo hacer una investigación?"
- text: "Método 1:"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wikibot-3001
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [mapama247/wikihow_es](https://huggingface.co/datasets/mapama247/wikihow_es) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7432 | 1.0 | 5950 | 2.6109 |
| 2.3556 | 2.0 | 11900 | 2.3059 |
| 2.1872 | 3.0 | 17850 | 2.1738 |
| 2.0891 | 4.0 | 23800 | 2.0990 |
| 2.0306 | 5.0 | 29750 | 2.0579 |
| 1.9975 | 6.0 | 35700 | 2.0395 |
| 1.9909 | 7.0 | 5950 | 2.0208 |
| 1.8924 | 8.0 | 11900 | 1.9678 |
| 1.8281 | 9.0 | 17850 | 1.9422 |
| 1.8376 | 10.0 | 5950 | 1.9576 |
| 1.7802 | 11.0 | 11900 | 1.9212 |
| 1.7465 | 12.0 | 17850 | 1.8996 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ai-app-generator/best-ai-nude-generators | ai-app-generator | 2025-03-14T19:43:15Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-03-14T19:42:32Z | ---
license: mit
---
# 7 Best AI Nude Generators
In this post, we will know the top AI nude generators that generate the best results. We have tested more than 100 generators based on key features, ease of use, processing time, and image quality and chose these 15 tools.
## What Are The Best AI Nude Generators?
### 1. Undress.app
Undress.app is an AI nude generator that allows you to remove clothes from any photo. With their service, you can undress any person in a photo absolutely FREE! By entering the website, you confirm that you are 18+ and agree to their terms and conditions.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)

#### Key Features:
It offers 2 painting modes: auto and manual. Auto mode automatically paints over clothes while manual mode allows you to select the area to paint.
You can get various customization options such as age & body type traits, quality, mode (undress, bikini, lingerie), and more.
Accessible support through Telegram or email
#### Pros:
Ability to undress any person in a photo for free
Opportunity to earn money as a partner
Rewards in crypto from user purchases
#### Cons:
Requires prior written permission to use photos of other people
Responsibility lies with the user for the images generated
#### Pricing:
Undress.app offers 2 free credits. The paid plan starts from $4.49 per month and goes up to 36.99 per month.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
### 2. Pornx.ai
Pornx.ai is an innovative platform that utilizes artificial intelligence to provide an AI nude generator. This cutting-edge technology allows you to generate realistic nude images using advanced algorithms and deep-learning models.
With Pornx.ai, you can explore the creative possibilities of AI and enhance your artistic projects or personal interests.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
#### Key Features:
High-Quality Image Generation: The AI nude generator of Pornx.ai produces high-quality and realistic nude images that closely resemble human figures.
Customization Options: You have the flexibility to adjust various parameters such as body shape, skin tone, and pose to create the desired nude image.
Privacy Protection: Pornx.ai ensures the privacy of its users by employing advanced encryption techniques and secure data handling practices.
Easy Integration: The AI nude generator can be easily integrated into your existing workflow or applications, making it convenient to incorporate into your projects.
Fast Processing: The platform utilizes powerful servers and optimized algorithms to generate nude images quickly, saving you time and effort.
Regular Updates: Pornx.ai continuously improves its AI nude generator by implementing updates and enhancements based on user feedback and technological advancements.
#### Pros:
Artistic Exploration: The AI nude generator allows you to explore artistic concepts and experiment with different visual styles.
Time-Saving: Generating nude images manually can be time-consuming, but with Pornx.ai, you can quickly generate realistic nude images in a matter of seconds.
Enhanced Privacy: By using the AI nude generator, you can avoid the need to involve models or expose sensitive personal information, ensuring privacy and anonymity.
#### Cons:
Ethical Considerations: The use of AI nude generators raises ethical concerns regarding consent, privacy, and the potential misuse of generated content.
Accuracy Limitations: While the AI nude generator produces realistic images, there may be instances where the generated results may not perfectly match the desired outcome.
#### Pricing:
Pornx.ai offers flexible pricing plans to cater to different user needs. They provide both subscription-based plans and pay-as-you-go options. For detailed pricing information, you can visit their official website and explore the available options.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### 3. Seduced.ai
Seduced.ai is an AI nude generator that allows you to create unique and personalized adult content. With Seduced.ai, you can explore your fantasies and fetishes in a safe and private environment.
They offer a wide range of features and extensions that go beyond the typical capabilities of AI generators, allowing you to customize and enhance your creations.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
#### Key Features
Generate High-Quality Images: Seduced.ai enables you to generate high-quality images that are realistic and visually appealing.
Extensions for Fetish Content: They offer a variety of extensions that empower you to produce fetish content, catering to a wide range of preferences.
Private Mode: You can generate content in private mode, ensuring your creations remain confidential and secure.
Upscale Resolution: Seduced.ai allows you to upscale the resolution of your images, increasing their quality and detail.
Discover and Download New Content: You can explore and download new content, expanding your collection and creative possibilities.
Replicate Faces and Poses: Seduced.ai enables you to replicate a face or pose from an uploaded image, adding a personal touch to your creations.
#### Pros
Wide Range of Extensions: Seduced.ai offers a diverse selection of extensions, allowing you to explore and create content that aligns with your specific interests and desires.
Priority Access: As a Pro member, you will have priority over Basic members, ensuring faster processing and shorter wait times.
#### Cons
Subscription-Based Pricing: Seduced.ai operates on a subscription-based model, which may not be suitable for those who prefer one-time purchases.
#### Pricing
Seduced.ai offers a Pro membership plan at $0.83 per day, billed at $25 per month. With the Pro plan, you get access to generate approximately 300 images per month, with the option to refill up to 30 times.
You also enjoy priority access, the ability to generate in private mode, upscale resolution, discover and download new content, select up to 2 extensions, save and re-use girls, edit generations, expand generations in portrait mode, and even generate videos.
Additionally, you can replicate a face or pose from an uploaded image to add a personal touch to your creations.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### 4. SoulGen
SoulGen is an AI Magic Tool that allows you to create stunning images from simple text prompts. With this innovative tool, you can bring your imagination to life and generate unique artwork in mere seconds.
One of the key features of SoulGen is its ability to create a portrait of someone lookalike by uploading a reference photo and asking the AI to draw a character with a similar face.
Additionally, you can edit your images and activate your creative superpowers by adding, extending, and removing content using simple text prompts.
#### Key Features
Create images from text prompts
Generate portraits of someone lookalike
Edit images with simple text prompts
Expand images beyond imagination with AI Outpainting
Create your dream character in seconds
Chat with AI friends and customize their appearance
#### Pros
Easy and quick image creation from text prompts
Ability to generate portraits of someone lookalike
Unlimited creative possibilities with image editing
#### Cons
Dependence on AI algorithms for image generation
Potential limitations in accurately capturing desired image details
#### Pricing
Soulgen offers a limited free and paid plan. The paid plan starts from $9.99 per month for advanced features such as 100 credits per month, priority queue, lifetime save, etc.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### 5. PornJourney
#### Key Features:
AI-Powered Image Generation: PornJourney leverages advanced AI algorithms to generate high-quality nude images that are indistinguishable from real ones.
Customization Options: They offer a range of customization options, allowing you to specify various attributes such as body type, skin tone, and more.
Privacy Protection: Your privacy is of utmost importance.
PornJourney ensures that all generated images are kept confidential and not shared with any third parties.
User-Friendly Interface: The platform is designed with simplicity in mind, making it easy for you to navigate and create the perfect nude images.
Fast and Efficient: The AI algorithms used by PornJourney are optimized for speed, ensuring that you can generate images quickly and efficiently.
Compatibility: PornJourney is compatible with various devices and operating systems, allowing you to access the platform from anywhere, anytime.
#### Pros:
Creative Exploration: With PornJourney, you can unleash your creativity and explore new possibilities in adult content creation.
Time-Saving: Generating nude images manually can be time-consuming. With an AI nude generator like PornJourney, you can save valuable time and focus on other aspects of your projects.
Realistic Results: The AI algorithms used by PornJourney ensure that the generated images are highly realistic, providing you with a professional-looking end product.
#### Cons:
Ethical Considerations: The use of AI nude generators raises ethical concerns regarding consent and privacy. It is important to use these tools responsibly and respect the boundaries of others.
#### Pricing:
PornJourney offers flexible pricing plans to suit your needs. They provide both free and premium options, allowing you to choose the plan that best fits your requirements.
The premium plans cost $15.99 per month and offer additional features and benefits, such as higher customization options and priority support. Visit the PornJourney website for detailed pricing information and to start your AI nude generator journey today!
### 6. AInude.ai
AINude.ai is an AI nude generator that caters to adult users interested in AI nude image generation and deep nude services.
It offers a unique and innovative way to explore and create AI nude art. With AINude.ai, you can transform your explicit visions into a stunning reality, allowing you to modify any image into a nude one with advanced deep nude technology.
The platform generates nude girl images using artificial intelligence, but it’s important to note that these characters do not exist in real life. It’s up to you to determine if the generated nudifier image aligns with your desires.
#### Key Features:
Customization: AINude.ai allows you to customize your AI naked girl’s appearance, hairstyle, background, posture, and more. You have full control over how the generated AI nude art looks.
Text-to-Image Editing: With AINude.ai, you can explore different nude AI girls by using text prompts. Simply input different prompts to generate AI nude characters that match your preferences.
Realistic Transformation: The AI nude generator uses advanced deepnude technology to remove clothing from a real photo and turn it into a nude girl without clothing. The transformation is seamless and realistic.
Easy to Use: Generating AI nude art on AINude.ai is a simple process. Just upload a real person photo, select the areas you want to make changes on, type in the desired prompts, and click the “Generate” button to create your customized AI deepnude image.
Privacy: Your generated deepnude images on AINude.ai are private and not visible to other users. They are stored securely and can only be shared if you choose to download and share them yourself.
Data Protection: AINude.ai handles your personal information with the utmost care and follows strict data protection practices. Your information is treated as confidential and is not sold or shared with third parties without your explicit consent.
### 7. Deep-nude.ai
Deep-nude.ai is an advanced AI nude generator that allows you to digitally undress photos and create AI nude images.
With its cutting-edge deepnude 3.0 technology, Deep-nude.ai ensures optimal nudify results, making it the best AI deepnude generator on the internet.
This free deepnude generator does not require any sign-in and can undress images in just 3 seconds. Whether you’re using a mobile phone, laptop, or desktop, Deep-Nude.AI’s in-browser AI nude editor is compatible with all mainstream devices and systems, including Android, iOS, and Windows.
#### Key Features:
Quick and Easy: With Deep-Nude.AI, you can receive the generated AI deepnude images within seconds. The deepnude algorithms used by this generator swiftly analyze the specific image parts you select and the prompts you provide, generating a picture of a naked girl that matches your preferences in no time.
Remove Clothing: Deep-Nude.AI features a smart AI deepnude algorithm that can digitally undress a photo, revealing nudity. Simply select the specific part of the image from which you wish to nudify, and then click the “Remove Object” option to generate an image of an AI nude girl.
Privacy Protection: Your generated AI deepnude images are not visible to others. Each AI nude girl image edited and images you nudify within the Deep-Nude.AI platform is kept private. The pictures remain isolated from other users’ interactions, unless you choose to download and share them with others on your own accord.
Data Privacy: Deep-Nude.AI places great importance on data privacy. Any personal information you provide while using the Deep-Nude.AI to generate AI deepnude is treated as confidential and stored securely. They do not engage in selling or sharing your information with any third parties unless they have obtained your explicit consent.
Free Trials: You are granted 2 Free Trials every day to nudify images on Deep-Nude.AI. This allows you to explore and experience the AI nude generator without any cost.
Easy to Use: Deep-Nude.AI is designed to be user-friendly. Its in-browser AI nude editor allows you to easily nudify any images without the need for login. Whether you’re a beginner or an expert, you can navigate and utilize the generator effortlessly.
### What Is an AI Nude Generator?
AI nude generator, also known as deepfake pornography, is a technology that uses artificial intelligence algorithms to superimpose a person’s face onto a pornographic video or image. This technology has gained widespread attention and raised concerns due to its potential to create fabricated and non-consensual sexual content.
### How Does AI Nude Generator Work?
Firstly, it’s important to understand that AI nude generators are powered by deep learning algorithms. These algorithms are trained on thousands of real nude images, learning the patterns and features that make up a realistic nude photo. This training process allows the AI to generate new images that look incredibly similar to real ones.
But how does it know what to generate? This is where perplexity comes into play. AI nude generators are programmed to create images that are both realistic and aesthetically pleasing. This means that they must consider factors such as body proportions, skin tone, and lighting to create a convincing image.
One of the key things that sets AI nude generators apart from other photo-editing software is the ability to create images of people who do not exist. By combining features and characteristics from multiple real images, the AI can generate a completely new and unique individual, making it almost impossible to determine if the image is real or fake.
Of course, this technology has raised ethical concerns, as it can easily be used to create non-consensual or revenge porn. However, developers of AI nude generators argue that it also has the potential to be used for good, such as in the entertainment industry or for medical purposes.
### Ethical and Safe Use of AI Nude Generators
The advancement of artificial intelligence (AI) technology has brought about a new trend of using AI nude generators. These generators use algorithms to create realistic and believable images of people, including their nudity.
While these generators may seem harmless and even entertaining, there are ethical and safety concerns surrounding their use. In this section, we will discuss the importance of ethical and safe use of AI nude generators.
#### Ethical Concerns
One of the main ethical concerns surrounding AI nude generators is the violation of privacy and consent. These generators often use personal photos without the subject’s permission, raising questions about ownership and control over one’s image. Additionally, the use of these generators can contribute to the objectification and sexualization of individuals, especially women. It is important to consider the potential harm and impact of using these generators without ethical considerations.
#### Safety Concerns
Another concern is the potential for misuse and abuse of these AI nude generators. With the rise of deepfake technology, there is a risk of these generators being used for malicious purposes, such as revenge porn or blackmail. These generators can also perpetuate harmful stereotypes and biases, further adding to the issue of misrepresentation and discrimination. It is crucial to prioritize the safety and well-being of individuals who may be affected by the use of these generators.
#### Tips for Ethical and Safe Use
Obtain consent from the individual before using their photos for AI nude generators.
Be mindful of the potential impact and harm of using these generators, and avoid perpetuating harmful stereotypes and biases.
Consider the source of these generators and their ethical guidelines.
Be cautious of sharing AI-generated nude images, as they could be used for malicious purposes.
### Is It Legal to Create AI Nude Images?
In recent years, there has been a rise in the use of artificial intelligence (AI) to create nude images of individuals. This technology, known as “deepfake”, has sparked controversy and ethical concerns. But the question remains: is it legal to create AI nude images?
Before diving into the legalities, it is important to understand what deepfake technology is. Deepfake uses AI algorithms to superimpose a person’s face onto another person’s body in videos or images, often creating a realistic but fake portrayal. This has raised concerns about the use of this technology to create non-consensual, sexually explicit content.
When it comes to the legality of creating AI nude images, the answer is not clear-cut. In most countries, some laws protect individuals from having their likeness used without their consent. This includes using someone’s image for commercial purposes or to defame them. However, these laws were not written with AI technology in mind, making it difficult to determine their application to deepfake images.
Some argue that deepfake images fall under the category of parody or satire, which is protected under free speech laws. But when these images are used to harass, intimidate, or harm others, they can be considered a violation of privacy and can lead to legal consequences.
Another aspect to consider is the use of AI nude images in pornography. While many pornographic videos and images feature consenting adults, deepfake technology allows for the creation of fake pornographic content using celebrities or even non-consenting individuals. This not only raises ethical concerns but also potentially violates laws against revenge porn and non-consensual pornography.
Ultimately, the legality of creating AI nude images depends on the intent and use of the content. It is important for individuals and companies to understand the ethical implications of using this technology and to ensure that it is not being used to harm or violate someone’s rights.
So, is it legal to create AI nude images? The answer is not straightforward, and the laws surrounding this topic are still evolving. However, it is important for individuals and society to critically examine the use of this technology to ensure that it does not perpetuate harm and violates basic human rights.
### Frequently Asked Answer Questions
#### – How accurate are AI nude photo generators in rendering images?
The accuracy of AI nude photo generators can vary depending on the specific algorithm and training data used. Some may produce highly realistic images, while others may have noticeable flaws or distortions.
#### – Can AI nude makers be used for commercial purposes?
Yes, AI nude generators can be used for commercial purposes as long as the user has the necessary rights and permissions for the images used. It is important to also consider ethical and legal implications when using AI nude generators for commercial purposes.
#### – Is a free nude AI generator suitable for all types of artistic projects?
No, a free nude AI generator may not be suitable for all types of artistic projects. It is important to carefully consider the intended use and potential ethical implications before using any AI technology, including nude generators. It may be necessary to use a paid or more advanced AI tool for certain projects.
#### – What are the potential applications of an AI nude generator?
Some potential applications of an AI nude generator include creating digital models for fashion and beauty industries, generating visual effects for films and video games, and assisting with medical research and education.
#### – Are there any risks associated with using an AI nude generator?
As with any technology, there are potential risks associated with using an AI nude generator. These may include issues with accuracy of generated images, potential misuse of the technology, and concerns about perpetuating unrealistic body standards. |
RichardErkhov/Zagusan_-_Wikibot-3001-4bits | RichardErkhov | 2025-03-14T19:43:06Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:42:40Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Wikibot-3001 - bnb 4bits
- Model creator: https://huggingface.co/Zagusan/
- Original model: https://huggingface.co/Zagusan/Wikibot-3001/
Original model description:
---
license: mit
base_model: Zagusan/Wikibot-3001
tags:
- generated_from_trainer
model-index:
- name: Wikibot-3001
results: []
datasets:
- mapama247/wikihow_es
language:
- es
pipeline_tag: text-generation
widget:
- text: "Hola. ¿Cómo estás?"
- text: "¿Cómo cocinar?"
- text: "¿Cómo hornear un pastel?"
- text: "¿Cómo hacer una investigación?"
- text: "Método 1:"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wikibot-3001
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [mapama247/wikihow_es](https://huggingface.co/datasets/mapama247/wikihow_es) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7432 | 1.0 | 5950 | 2.6109 |
| 2.3556 | 2.0 | 11900 | 2.3059 |
| 2.1872 | 3.0 | 17850 | 2.1738 |
| 2.0891 | 4.0 | 23800 | 2.0990 |
| 2.0306 | 5.0 | 29750 | 2.0579 |
| 1.9975 | 6.0 | 35700 | 2.0395 |
| 1.9909 | 7.0 | 5950 | 2.0208 |
| 1.8924 | 8.0 | 11900 | 1.9678 |
| 1.8281 | 9.0 | 17850 | 1.9422 |
| 1.8376 | 10.0 | 5950 | 1.9576 |
| 1.7802 | 11.0 | 11900 | 1.9212 |
| 1.7465 | 12.0 | 17850 | 1.8996 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Skyfallirk/still_life_LoRa | Skyfallirk | 2025-03-14T19:42:34Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-03-14T19:42:22Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo TOK of still life
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Skyfallirk/still_life_LoRa
<Gallery />
## Model description
These are Skyfallirk/still_life_LoRa LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo TOK of still life` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Skyfallirk/still_life_LoRa/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# Hedged sketch (not verified against this checkpoint): load the SDXL base
# pipeline, apply these LoRA weights, then generate with the trigger phrase.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Skyfallirk/still_life_LoRa")
image = pipe("a photo TOK of still life").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
RichardErkhov/Shams2611_-_testv5-8bits | RichardErkhov | 2025-03-14T19:42:23Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:38:03Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
testv5 - bnb 8bits
- Model creator: https://huggingface.co/Shams2611/
- Original model: https://huggingface.co/Shams2611/testv5/
Original model description:
---
base_model:
- EleutherAI/llemma_7b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [EleutherAI/llemma_7b](https://huggingface.co/EleutherAI/llemma_7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: EleutherAI/llemma_7b
layer_range: [0, 10]
- sources:
- model: EleutherAI/llemma_7b
layer_range: [5, 15]
merge_method: passthrough
dtype: bfloat16
```
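As a rough illustration (a toy sketch, not mergekit's actual implementation), the passthrough method simply concatenates the listed layer ranges. Assuming mergekit's half-open `layer_range` convention, layers 5 through 9 of `llemma_7b` appear twice in the resulting 20-layer stack:

```python
# Toy illustration of passthrough layer stacking (mirrors the YAML above).
# Assumes half-open [start, end) ranges, as mergekit's layer_range uses.
slices = [(0, 10), (5, 15)]

def stacked_layers(slices):
    layers = []
    for start, end in slices:
        layers.extend(range(start, end))  # copy each source range in order
    return layers

merged = stacked_layers(slices)
print(len(merged))  # 20 layers total; layers 5 through 9 are duplicated
```

Duplicating a mid-stack span like this is one common motivation for such self-merges: it grows the model depth before further fine-tuning.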
|
ai-app-generator/ai-nude-generators | ai-app-generator | 2025-03-14T19:41:40Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T19:40:59Z | ---
license: apache-2.0
---
# 5 Best AI Nude Generators In 2025
The best AI nude generator has features like realistic & accurate result generation, and customization options (like age, body type, pose, etc), faster rendering speed, privacy, and security.
I have tried more than 100 tools in the field of undress, deep nude, or AI nude. I chose these 5 tools that follow all the criteria I mentioned above.
## 1. Undress.app
Undress.app is recognized as one of the best AI nude generators available online. Utilizing advanced artificial intelligence technology, it allows users to create unclothed images quickly and efficiently.
The platform is user-friendly, ensuring that even those unfamiliar with such tools can navigate it with ease. With a commitment to user privacy and data security, Undress.app stands out as a trustworthy option for generating NSFW content.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)

### **Key Features**
Multiple AI Modes: Users can choose from various undressing modes, including Lingerie, Bikini, and NSFW mode, allowing for a personalized experience.
High-Quality Results: The AI processes images to deliver high-quality results, ensuring that the generated images are clear and detailed.
Free Trial Access: New users can sign up and receive free credits to explore the app's features without any financial commitment.
Privacy Assurance: Undress.app does not save any user data, ensuring that all actions remain confidential and secure.
Compatibility: The app works with both male and female photos, as well as anime images, providing a wide range of customization options.
User-Friendly Interface: The platform is designed to be intuitive, making it easy for users to upload images and generate results quickly.
Regular Updates: The developers frequently update the app to improve functionality and security, ensuring a safe user experience.
### **My Experience**
Using Undress.app was a straightforward and enjoyable experience. After signing up, I was greeted with a clean and intuitive interface that made navigation a breeze.
I selected the bikini mode and uploaded a photo I was allowed to use. Within seconds, the AI processed the image and delivered a high-quality result without any blurriness.
I appreciated the variety of modes available, which allowed me to experiment with different styles. The privacy features gave me peace of mind, knowing that my data was secure and not stored anywhere.
Overall, my experience was positive, and I found the tool to be effective and user-friendly.
### **Pros:**
Easy to use with a user-friendly interface.
High-quality image generation with no blur.
Multiple modes for diverse customization.
Strong privacy and security measures in place.
Free trial credits are available for new users.
Works with various types of images, including anime.
### **Cons:**
Sign-up is required, which may deter some users.
Free credits may be limited, requiring users to purchase more for extensive use.
Results can vary based on the quality of the uploaded image.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
## 2. Pornx.ai
Pornx.ai is revolutionizing the world of adult content with its cutting-edge AI nude generator. This innovative platform allows users to create stunning, personalized adult images and videos that cater to their unique fantasies.
With a user-friendly interface and a plethora of customization options, Pornx.ai empowers users to unleash their creativity and explore their desires in a safe and imaginative environment.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
AI Image Generator: Generate your own AI porn images by selecting models, including women, men, or transgender individuals. Customize with various filters, body types, skin tones, hairstyles, outfits, and backgrounds.
AI Video Generator: Craft personalized videos that reflect your imagination, allowing for a more immersive experience.
Quality Mode: Enhance your images with the "Quality" feature, which zooms in on details and increases resolution for a top-notch visual experience.
Custom Pose: Transfer character poses from your uploaded images to the generated images, making storytelling and personal pleasure more engaging.
In Paint Feature: Modify specific areas of your images by selecting and editing them, allowing for tailored adjustments and enhancements.
Community Engagement: Join the Pornx.ai Discord community to connect with other users, share experiences, and gain insights into the platform.
Age Verification: The platform ensures that all users are of legal adult age, maintaining a safe environment for mature content.
Free and Paid Plans: While the basic features are available for free, users can upgrade to a paid plan for additional benefits and enhanced functionalities.
### **My Experience**
Using Pornx.ai has been an exhilarating journey. The intuitive interface made it easy to navigate through the various features. I was particularly impressed with the AI Image Generator, which allowed me to create images that closely matched my vision.
The customization options were extensive, enabling me to experiment with different models and styles. The Quality Mode truly elevated the visual appeal of my creations, making them look professional and polished. Overall, my experience was enjoyable and fulfilling, as I could explore my creativity without limitations.
### **Pros**
User-Friendly Interface: Easy to navigate, even for beginners.
Extensive Customization: A wide range of options for personalizing images and videos.
High-Quality Output: The Quality Mode enhances the visual appeal significantly.
Community Support: Engaging with other users through Discord fosters a sense of belonging.
Free Access: Basic features are available at no cost, making it accessible to everyone.
### **Cons:**
Age Restrictions: The platform is restricted to verified adults (18+).
Paid Features: Some advanced functionalities require a subscription, which may not be ideal for all users.
Content Limitations: The platform is designed for adult content, which may not appeal to everyone.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 3. Seduced.ai
Seduced.ai is recognized as one of the leading AI nude generators available today. This innovative platform allows users to create stunning and unique NSFW images and videos effortlessly, without requiring any technical skills.
With a wide array of features and customizable options, Seduced.ai caters to various preferences and fetishes, making it a go-to choice for those looking to explore their fantasies in a safe and private environment.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
Easy-to-Use Interface: The platform is designed for users of all skill levels, allowing anyone to generate content with just a few clicks.
Video Generation: Users can create smooth porn videos of up to 6 seconds, combining multiple sequences for a seamless experience.
Mixable Extensions: Seduced.ai allows users to mix up to 8 extensions, enabling the creation of unique images that cannot be found elsewhere.
Character Reuse: Users can save previously generated characters for reuse in future creations, allowing for infinite scenarios.
Diverse AI Models: The platform offers a selection of 10 distinct AI models, allowing users to create both realistic and anime-style content.
Upscaling Options: Users can enhance the resolution of generated images two or three times, adding finer details for a more realistic appearance.
Privacy Control: Users have the option to keep their generated images and videos private, ensuring discretion.
Fetish Support: Seduced.ai covers a wide range of fetishes, providing extensions that empower users to produce content beyond typical capabilities.
### **My Experience**
Using Seduced.ai has been a remarkable experience. The user-friendly interface made it easy for me to navigate through the various features. I was particularly impressed by the extensive library of extensions available, which allowed me to mix and match different elements to create unique images.
The ability to generate videos was an added bonus, and I found the quality to be surprisingly high for such a short duration. The option to reuse characters made it easy to create a storyline, enhancing the overall experience.
### **Pros:**
User-Friendly: No technical skills are required to generate content.
High-Quality Output: The images and videos produced are of excellent quality.
Wide Range of Options: Extensive library of extensions and AI models to choose from.
Privacy Features: Users can keep their creations private.
Regular Updates: The platform frequently adds new features and extensions.
### **Cons:**
Subscription Costs: Some users may find the pricing plans to be on the higher side.
Limited Video Duration: The maximum video length of 6 seconds may not be sufficient for all users.
Content Restrictions: While the platform supports various fetishes, some niche interests may not be fully covered.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 4. Undress.cc
Undress.cc is recognized as one of the best AI nude generators available today. This innovative platform utilizes advanced artificial intelligence technology to create realistic images of women without clothing.
Designed to be user-friendly and accessible, Undress.cc allows users to explore their fantasies in a safe and private environment. With its intuitive interface and various features, it has gained popularity among users looking for creative ways to generate undressed images.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
Free Access: Undress.cc offers a free AI undressing tool, allowing users to generate images without any initial cost.
User-Friendly Interface: The platform is designed to be intuitive, making it easy for anyone to navigate and utilize its features effectively.
Multiple Modes: Users can choose from different modes, such as 'X-Ray Mode' for deep nude undressing or 'Lingerie Mode' to explore various fantasies.
Privacy and Security: The app prioritizes user security and confidentiality, ensuring that all generated images and user data remain private.
Registration Benefits: Upon signing up, users receive free credits to explore the service, including the deep nude functionality.
Legal Compliance: Undress.cc operates within the bounds of current legal frameworks, ensuring that its services are legitimate and lawful.
Creative Exploration: The tool provides a unique way to experiment with undressing images while respecting user preferences.
Continuous Updates: The platform is regularly updated to enhance user experience and incorporate the latest advancements in AI technology.
### **My Experience**
Using Undress.cc was a straightforward and enjoyable experience. After registering on the platform, I was greeted with a clean and intuitive interface that made navigation easy. Uploading a clear image was simple, and I was impressed by the variety of modes available.
I decided to try the 'X-Ray Mode' and was amazed at the realism of the generated images. The process was quick, and I appreciated the privacy measures in place, which made me feel secure while using the app. Overall, my experience with Undress.cc was positive, and I found it to be a valuable tool for creative exploration.
### **Pros:**
Free access to basic features.
Intuitive and user-friendly interface.
Multiple modes for different preferences.
Strong emphasis on user privacy and security.
Legal and compliant with current regulations.
### **Cons:**
Some advanced features may require purchasing credits.
Limited to images of women, which may not appeal to all users.
Potential ethical concerns regarding the use of generated images.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 5. Undressai.tools
Undressai.tools is a cutting-edge AI nude generator that utilizes advanced technologies to transform clothed images into realistic nude visuals.
Leveraging deep learning algorithms and sophisticated image processing techniques, this tool offers users a unique and innovative way to explore the artistic potential of AI-generated imagery.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
Stable Diffusion: This model enhances image generation by producing high-quality, coherent outputs with minimal artifacts, significantly improving realism and detail in the undressed images.
Generative Adversarial Networks (GANs): GANs power Undressai.tools by utilizing two neural networks to generate highly realistic images of nudity, ensuring lifelike results.
Deep Learning Models: Sophisticated algorithms analyze clothing patterns and body structures to accurately create undressed versions of subjects, enhancing the overall quality of the output.
Image Synthesis: AI-driven image synthesis generates realistic skin textures that replace removed clothing, ensuring that the final images appear natural and believable.
Pose Estimation: Machine learning algorithms track and predict body poses, ensuring anatomically accurate undressing outcomes that respect the original image's context.
Convolutional Neural Networks (CNNs): CNNs extract key features from input images to guide the undressing process, improving output quality and detail.
Computer Vision and Image Recognition: These techniques identify and isolate clothing areas, allowing for precise removal and replacement, which is crucial for achieving realistic results.
Style Transfer: Advanced algorithms ensure that the generated nude images match the original's lighting, shading, and artistic style, maintaining the integrity of the original photograph.
### **My Experience**
Using Undressai.tools has been an intriguing experience. The interface is intuitive, making it easy to upload images and select the areas to modify. I was impressed by the speed at which the tool processed the images and the quality of the results.
The generated nude visuals were remarkably realistic, capturing the essence of the original images while effectively removing clothing. The ability to adjust and refine the output further enhanced my experience, allowing for creative experimentation.
### **Pros:**
User-Friendly Interface: The platform is easy to navigate, making it accessible for users of all skill levels.
High-Quality Outputs: The generated images are realistic and detailed, thanks to advanced AI technologies.
Privacy Focused: All generated images are auto-deleted within 48 hours, ensuring user privacy and data security.
Versatile Applications: The tool can be used for various purposes, including artistic exploration and personal projects.
### **Cons:**
Ethical Considerations: Users must be mindful of the ethical implications of generating nude images, particularly regarding consent and privacy.
Limited Image Formats: The tool currently supports only specific file formats (.jpg, .png, .heic), which may restrict some users.
Potential Misuse: There is a risk of the technology being misused for inappropriate purposes, necessitating responsible usage guidelines.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## Frequently Asked Questions (FAQS)
### **1. What is AI Nude?**
AI Nude refers to various applications and tools that utilize artificial intelligence to create altered images, specifically by generating realistic nude versions of clothed individuals. These technologies often employ deep learning techniques and generative algorithms, enabling users to manipulate and alter visual content. However, their use has raised significant privacy and ethical concerns, particularly regarding consent and the potential for misuse.
### **2. How Does AI Nude Work?**
AI Nude applications typically use Generative Adversarial Networks (GANs), which consist of two neural networks: a generator that creates images and a discriminator that evaluates their realism. The following steps explain how AI Nude works:
Data Collection: The networks are trained on large image datasets so they learn what realistic images look like.
Training Process: The generator produces images while the discriminator assesses them, providing feedback for refinement.
Iterative Improvement: Over multiple cycles, the generator enhances its capability to create realistic images, ultimately producing the final output.
### **3. What are the Applications of AI Nude Generator?**
AI Nude generators can be used for various applications, including:
Artistic Exploration: Artists may use AI nude tools to create digital art or explore the representation of human forms.
Marketing: Certain businesses might utilize AI to produce provocative content for advertising.
Cyber Harassment: Unfortunately, these tools are also misused to create non-consensual images, leading to harassment or blackmail.
It is crucial to note that while the technology has creative potential, its applications need to be approached with caution due to ethical and legal implications.
### **4. Are there privacy and ethical concerns regarding AI Nude?**
Yes, there are significant privacy and ethical concerns surrounding AI Nude technologies. Here are some key issues:
Lack of Consent: AI nude generators create images without the subject's permission, violating privacy rights.
Potential for Misuse: Generated images can be used for harassment, blackmail, or revenge, causing emotional and psychological harm.
Legal Gaps: Current laws often inadequately address the nuances of digital image manipulation, complicating legal enforcement.
Impact on Mental Health: Victims of non-consensual image manipulation may experience anxiety, depression, and damage to their personal and professional reputations.
|
yosefw/roberta-amharic-reranker-medium-v5 | yosefw | 2025-03-14T19:41:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-14T19:41:03Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
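Until the authors add an official snippet, the sketch below illustrates the generic cross-encoder reranking pattern this model (tagged `cross-encoder`, with an XLM-RoBERTa classification head) is built for. The `overlap_score` toy function is purely a hypothetical stand-in for the fine-tuned model's relevance score, which would come from running the classification head on each (query, passage) pair.

```python
def rerank(query, passages, score_fn, top_k=3):
    """Score each (query, passage) pair with a cross-encoder-style
    scorer and return the top_k passages, best first."""
    scored = [(score_fn(query, p), p) for p in passages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored[:top_k]]

# Toy stand-in for the real scorer: with this model, the score would be
# produced by the XLM-RoBERTa classification head applied to the pair.
def overlap_score(query, passage):
    return len(set(query.split()) & set(passage.split()))

ranked = rerank(
    "amharic news retrieval",
    ["amharic news corpus", "cooking recipes", "news retrieval systems"],
    overlap_score,
    top_k=2,
)
print(ranked)
```

Unlike a bi-encoder, a cross-encoder scores each pair jointly, so it is typically used to rerank a small candidate set rather than to search a whole corpus.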
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Master-Rapha7/mobileViTV2-64 | Master-Rapha7 | 2025-03-14T19:40:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mobilevitv2",
"image-classification",
"generated_from_trainer",
"base_model:apple/mobilevitv2-1.0-imagenet1k-256",
"base_model:finetune:apple/mobilevitv2-1.0-imagenet1k-256",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-03-14T16:45:44Z | ---
library_name: transformers
license: other
base_model: apple/mobilevitv2-1.0-imagenet1k-256
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: mobileViTV2-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobileViTV2-64
This model is a fine-tuned version of [apple/mobilevitv2-1.0-imagenet1k-256](https://huggingface.co/apple/mobilevitv2-1.0-imagenet1k-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3307
- Accuracy: 0.9106
- F1: 0.9093
- Precision: 0.9118
- Recall: 0.9106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
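For readers who want the schedule in concrete terms, here is a small sketch (not the actual training code) that derives the step counts implied by the settings above, using the 364 optimizer steps per epoch visible in the results table:

```python
# Derived from the card: 364 steps/epoch, 100 epochs,
# linear lr schedule with 10% warmup, peak lr 3e-05.
steps_per_epoch = 364
num_epochs = 100
warmup_ratio = 0.1
peak_lr = 3e-05

total_steps = steps_per_epoch * num_epochs       # 36400
warmup_steps = int(total_steps * warmup_ratio)   # 3640

def lr_at(step):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

print(total_steps, warmup_steps, lr_at(warmup_steps))
```

So the learning rate ramps up over the first ~3,640 steps (the first 10 epochs) and then decays linearly to zero at step 36,400.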
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6065 | 1.0 | 364 | 1.6103 | 0.1860 | 0.1866 | 0.1893 | 0.1860 |
| 1.5721 | 2.0 | 728 | 1.5804 | 0.2851 | 0.2857 | 0.2889 | 0.2851 |
| 1.4793 | 3.0 | 1092 | 1.5111 | 0.4229 | 0.4057 | 0.4006 | 0.4229 |
| 1.316 | 4.0 | 1456 | 1.2841 | 0.5028 | 0.4652 | 0.5157 | 0.5028 |
| 1.1397 | 5.0 | 1820 | 1.0520 | 0.5909 | 0.5474 | 0.6502 | 0.5909 |
| 0.8639 | 6.0 | 2184 | 0.8194 | 0.7163 | 0.6970 | 0.7325 | 0.7163 |
| 0.7371 | 7.0 | 2548 | 0.6773 | 0.7796 | 0.7711 | 0.7927 | 0.7796 |
| 0.6451 | 8.0 | 2912 | 0.5546 | 0.8292 | 0.8258 | 0.8334 | 0.8292 |
| 0.5299 | 9.0 | 3276 | 0.4800 | 0.8485 | 0.8461 | 0.8477 | 0.8485 |
| 0.387 | 10.0 | 3640 | 0.4091 | 0.8760 | 0.8737 | 0.8759 | 0.8760 |
| 0.3903 | 11.0 | 4004 | 0.3547 | 0.8884 | 0.8867 | 0.8887 | 0.8884 |
| 0.3513 | 12.0 | 4368 | 0.3207 | 0.9008 | 0.8983 | 0.9038 | 0.9008 |
| 0.3145 | 13.0 | 4732 | 0.3213 | 0.8967 | 0.8939 | 0.9009 | 0.8967 |
| 0.1838 | 14.0 | 5096 | 0.3013 | 0.8939 | 0.8924 | 0.8941 | 0.8939 |
| 0.3438 | 15.0 | 5460 | 0.3229 | 0.8857 | 0.8843 | 0.8850 | 0.8857 |
| 0.1913 | 16.0 | 5824 | 0.2568 | 0.9174 | 0.9159 | 0.9191 | 0.9174 |
| 0.2078 | 17.0 | 6188 | 0.2609 | 0.9187 | 0.9169 | 0.9206 | 0.9187 |
| 0.2061 | 18.0 | 6552 | 0.2811 | 0.9077 | 0.9061 | 0.9076 | 0.9077 |
| 0.2806 | 19.0 | 6916 | 0.2536 | 0.9242 | 0.9230 | 0.9260 | 0.9242 |
| 0.2495 | 20.0 | 7280 | 0.2881 | 0.9091 | 0.9076 | 0.9094 | 0.9091 |
| 0.0361 | 21.0 | 7644 | 0.2875 | 0.9311 | 0.9301 | 0.9331 | 0.9311 |
| 0.1811 | 22.0 | 8008 | 0.3067 | 0.9063 | 0.9050 | 0.9056 | 0.9063 |
| 0.1129 | 23.0 | 8372 | 0.2996 | 0.9050 | 0.9047 | 0.9053 | 0.9050 |
| 0.1138 | 24.0 | 8736 | 0.2970 | 0.9063 | 0.9060 | 0.9066 | 0.9063 |
| 0.3135 | 25.0 | 9100 | 0.3723 | 0.8967 | 0.8968 | 0.8972 | 0.8967 |
| 0.0828 | 26.0 | 9464 | 0.3574 | 0.9063 | 0.9060 | 0.9059 | 0.9063 |
| 0.0783 | 27.0 | 9828 | 0.4087 | 0.8939 | 0.8926 | 0.8926 | 0.8939 |
| 0.051 | 28.0 | 10192 | 0.3713 | 0.9063 | 0.9060 | 0.9068 | 0.9063 |
| 0.0744 | 29.0 | 10556 | 0.4470 | 0.8953 | 0.8951 | 0.8958 | 0.8953 |
| 0.0814 | 30.0 | 10920 | 0.4289 | 0.9077 | 0.9085 | 0.9099 | 0.9077 |
| 0.131 | 31.0 | 11284 | 0.4600 | 0.9008 | 0.8996 | 0.8997 | 0.9008 |
| 0.0245 | 32.0 | 11648 | 0.4818 | 0.8981 | 0.8978 | 0.8977 | 0.8981 |
| 0.0541 | 33.0 | 12012 | 0.4678 | 0.9050 | 0.9043 | 0.9040 | 0.9050 |
| 0.1011 | 34.0 | 12376 | 0.5298 | 0.8994 | 0.8985 | 0.8991 | 0.8994 |
| 0.17 | 35.0 | 12740 | 0.5093 | 0.9036 | 0.9026 | 0.9024 | 0.9036 |
| 0.0892 | 36.0 | 13104 | 0.5018 | 0.9063 | 0.9050 | 0.9050 | 0.9063 |
| 0.0246 | 37.0 | 13468 | 0.5520 | 0.9077 | 0.9058 | 0.9061 | 0.9077 |
| 0.0564 | 38.0 | 13832 | 0.5493 | 0.9077 | 0.9075 | 0.9077 | 0.9077 |
| 0.0817 | 39.0 | 14196 | 0.5607 | 0.9091 | 0.9084 | 0.9081 | 0.9091 |
| 0.0056 | 40.0 | 14560 | 0.5990 | 0.8939 | 0.8947 | 0.8961 | 0.8939 |
| 0.0653 | 41.0 | 14924 | 0.5870 | 0.9146 | 0.9136 | 0.9136 | 0.9146 |
| 0.1649 | 42.0 | 15288 | 0.5882 | 0.9050 | 0.9040 | 0.9039 | 0.9050 |
| 0.1057 | 43.0 | 15652 | 0.5924 | 0.9008 | 0.8999 | 0.9002 | 0.9008 |
| 0.0859 | 44.0 | 16016 | 0.5830 | 0.8994 | 0.8994 | 0.8999 | 0.8994 |
| 0.1809 | 45.0 | 16380 | 0.6357 | 0.8953 | 0.8939 | 0.8936 | 0.8953 |
| 0.1285 | 46.0 | 16744 | 0.6617 | 0.8967 | 0.8965 | 0.8975 | 0.8967 |
| 0.1018 | 47.0 | 17108 | 0.6006 | 0.9050 | 0.9044 | 0.9042 | 0.9050 |
| 0.0091 | 48.0 | 17472 | 0.5762 | 0.9091 | 0.9090 | 0.9094 | 0.9091 |
| 0.0368 | 49.0 | 17836 | 0.6097 | 0.9077 | 0.9067 | 0.9071 | 0.9077 |
| 0.0585 | 50.0 | 18200 | 0.6059 | 0.9063 | 0.9059 | 0.9061 | 0.9063 |
| 0.0373 | 51.0 | 18564 | 0.6621 | 0.8953 | 0.8953 | 0.8963 | 0.8953 |
| 0.1672 | 52.0 | 18928 | 0.6081 | 0.9022 | 0.9020 | 0.9019 | 0.9022 |
| 0.0344 | 53.0 | 19292 | 0.6145 | 0.8994 | 0.9002 | 0.9011 | 0.8994 |
| 0.0727 | 54.0 | 19656 | 0.6106 | 0.9036 | 0.9034 | 0.9034 | 0.9036 |
| 0.1997 | 55.0 | 20020 | 0.6037 | 0.9091 | 0.9082 | 0.9090 | 0.9091 |
| 0.0437 | 56.0 | 20384 | 0.5835 | 0.9105 | 0.9100 | 0.9105 | 0.9105 |
| 0.0263 | 57.0 | 20748 | 0.6032 | 0.9063 | 0.9062 | 0.9064 | 0.9063 |
| 0.056 | 58.0 | 21112 | 0.5828 | 0.9105 | 0.9101 | 0.9102 | 0.9105 |
| 0.0422 | 59.0 | 21476 | 0.6179 | 0.9105 | 0.9111 | 0.9129 | 0.9105 |
| 0.0377 | 60.0 | 21840 | 0.6400 | 0.8981 | 0.8997 | 0.9027 | 0.8981 |
| 0.1162 | 61.0 | 22204 | 0.5841 | 0.9105 | 0.9106 | 0.9108 | 0.9105 |
| 0.0407 | 62.0 | 22568 | 0.6017 | 0.9063 | 0.9064 | 0.9067 | 0.9063 |
| 0.0443 | 63.0 | 22932 | 0.6064 | 0.9036 | 0.9031 | 0.9029 | 0.9036 |
| 0.089 | 64.0 | 23296 | 0.6250 | 0.9008 | 0.9011 | 0.9018 | 0.9008 |
| 0.0971 | 65.0 | 23660 | 0.6729 | 0.9022 | 0.9011 | 0.9018 | 0.9022 |
| 0.046 | 66.0 | 24024 | 0.6445 | 0.9063 | 0.9060 | 0.9062 | 0.9063 |
| 0.0387 | 67.0 | 24388 | 0.6070 | 0.9036 | 0.9039 | 0.9046 | 0.9036 |
| 0.0709 | 68.0 | 24752 | 0.5890 | 0.9132 | 0.9131 | 0.9132 | 0.9132 |
| 0.0273 | 69.0 | 25116 | 0.6484 | 0.9008 | 0.9001 | 0.9007 | 0.9008 |
| 0.1951 | 70.0 | 25480 | 0.6336 | 0.9077 | 0.9075 | 0.9075 | 0.9077 |
| 0.0569 | 71.0 | 25844 | 0.6546 | 0.9105 | 0.9104 | 0.9105 | 0.9105 |
| 0.1145 | 72.0 | 26208 | 0.6964 | 0.9036 | 0.9026 | 0.9027 | 0.9036 |
| 0.0352 | 73.0 | 26572 | 0.6657 | 0.9118 | 0.9114 | 0.9115 | 0.9118 |
| 0.0375 | 74.0 | 26936 | 0.6417 | 0.9050 | 0.9054 | 0.9059 | 0.9050 |
| 0.0351 | 75.0 | 27300 | 0.6812 | 0.9091 | 0.9077 | 0.9081 | 0.9091 |
| 0.0675 | 76.0 | 27664 | 0.6445 | 0.9105 | 0.9100 | 0.9103 | 0.9105 |
| 0.0418 | 77.0 | 28028 | 0.7359 | 0.9091 | 0.9073 | 0.9084 | 0.9091 |
| 0.0353 | 78.0 | 28392 | 0.6701 | 0.9022 | 0.9021 | 0.9022 | 0.9022 |
| 0.0221 | 79.0 | 28756 | 0.6607 | 0.9036 | 0.9039 | 0.9051 | 0.9036 |
| 0.2435 | 80.0 | 29120 | 0.6487 | 0.9118 | 0.9115 | 0.9114 | 0.9118 |
| 0.0362 | 81.0 | 29484 | 0.7711 | 0.9077 | 0.9060 | 0.9064 | 0.9077 |
| 0.0116 | 82.0 | 29848 | 0.6276 | 0.9063 | 0.9067 | 0.9073 | 0.9063 |
| 0.001 | 83.0 | 30212 | 0.6564 | 0.9022 | 0.9020 | 0.9022 | 0.9022 |
| 0.013 | 84.0 | 30576 | 0.6576 | 0.9077 | 0.9071 | 0.9072 | 0.9077 |
| 0.0183 | 85.0 | 30940 | 0.7075 | 0.9036 | 0.9033 | 0.9038 | 0.9036 |
| 0.0367 | 86.0 | 31304 | 0.7168 | 0.9118 | 0.9100 | 0.9108 | 0.9118 |
| 0.027 | 87.0 | 31668 | 0.6892 | 0.9132 | 0.9133 | 0.9143 | 0.9132 |
| 0.015 | 88.0 | 32032 | 0.6886 | 0.9077 | 0.9069 | 0.9069 | 0.9077 |
| 0.0435 | 89.0 | 32396 | 0.6863 | 0.9008 | 0.9009 | 0.9012 | 0.9008 |
| 0.0049 | 90.0 | 32760 | 0.6883 | 0.9077 | 0.9072 | 0.9069 | 0.9077 |
| 0.1041 | 91.0 | 33124 | 0.7216 | 0.9008 | 0.9000 | 0.9002 | 0.9008 |
| 0.0465 | 92.0 | 33488 | 0.7032 | 0.9022 | 0.9021 | 0.9026 | 0.9022 |
| 0.0221 | 93.0 | 33852 | 0.7131 | 0.9036 | 0.9025 | 0.9023 | 0.9036 |
| 0.0091 | 94.0 | 34216 | 0.6886 | 0.8953 | 0.8963 | 0.8976 | 0.8953 |
| 0.0322 | 95.0 | 34580 | 0.7213 | 0.9022 | 0.9020 | 0.9024 | 0.9022 |
| 0.0348 | 96.0 | 34944 | 0.7005 | 0.9022 | 0.9016 | 0.9014 | 0.9022 |
| 0.0357 | 97.0 | 35308 | 0.7131 | 0.8967 | 0.8971 | 0.8980 | 0.8967 |
| 0.0363 | 98.0 | 35672 | 0.6947 | 0.9118 | 0.9112 | 0.9114 | 0.9118 |
| 0.0249 | 99.0 | 36036 | 0.6783 | 0.9132 | 0.9126 | 0.9126 | 0.9132 |
| 0.0179 | 100.0 | 36400 | 0.6614 | 0.9036 | 0.9037 | 0.9039 | 0.9036 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.4.0
- Tokenizers 0.21.0
|
RichardErkhov/lionking927_-_dippy-001-awq | RichardErkhov | 2025-03-14T19:39:39Z | 0 | 0 | null | [
"safetensors",
"stablelm",
"arxiv:2203.05482",
"4-bit",
"awq",
"region:us"
] | null | 2025-03-14T19:37:54Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
dippy-001 - AWQ
- Model creator: https://huggingface.co/lionking927/
- Original model: https://huggingface.co/lionking927/dippy-001/
Original model description:
---
base_model:
- aks1s/aks-11-06
- gtsru/dek-11-04
library_name: transformers
tags:
- mergekit
- merge
---
# merged_models
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [aks1s/aks-11-06](https://huggingface.co/aks1s/aks-11-06)
* [gtsru/dek-11-04](https://huggingface.co/gtsru/dek-11-04)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: gtsru/dek-11-04
parameters:
weight: 1.0
- model: aks1s/aks-11-06
parameters:
weight: 0.6
merge_method: linear
dtype: float16
```
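Operationally, a linear merge is just a weighted average of corresponding parameter tensors; mergekit normalizes the weights by default for the `linear` method, so 1.0 and 0.6 become 0.625 and 0.375. The following is a minimal illustrative sketch, not mergekit's actual implementation:

```python
def linear_merge(tensors, weights):
    """Weighted average of same-shaped parameter lists, with weights
    normalized to sum to 1 (mergekit's default for the linear method)."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return [sum(w * t[i] for w, t in zip(norm, tensors))
            for i in range(len(tensors[0]))]

# Stand-ins for one flattened parameter tensor from each source model.
dek_param = [1.0, 2.0]   # gtsru/dek-11-04, weight 1.0
aks_param = [3.0, 4.0]   # aks1s/aks-11-06, weight 0.6
merged = linear_merge([dek_param, aks_param], [1.0, 0.6])
print(merged)
```

In the real merge, this averaging is applied tensor-by-tensor across every weight matrix shared by the two models, and the result is saved in float16 as specified by `dtype`.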
|
ai-apps-directory/top-deepnude-ai-websites | ai-apps-directory | 2025-03-14T19:38:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-14T19:38:09Z | # Best Deepnude Apps Of 2025
The best deep nude apps should have features like high-quality & accurate generation, high processing speed, customization options (such as age selection, undress mode, body type, pose selection, etc.), privacy of images, editing, and much more.
I found 2 apps that have all of the above features. I have been using these apps for over a year.
**Summary Of Our Top Picks**
Undress.app: Best For High Quality Nudes. ⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
Undress.cc: Best For Customization Options. ⏩⏩⏩[**Try Undress CC For Free**](https://bestaitools.top/fgRB)
Here’s the detailed breakdown of each app:
## 1. Undress.app
Undress.app is a groundbreaking platform that merges artificial intelligence with adult content creation, allowing users to explore their fantasies uniquely and interactively. Among its standout features are deep nude apps that utilize advanced technology to create realistic nude images based on clothed photos.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
### **Key Features**
AI Image Generator: Users can generate realistic AI porn images, customizing every detail from model type (female, male, transgender) to skin tone and backgrounds.
Quality Mode: Elevate image quality with levels from Base to Ultra, enhancing details and resolution significantly.
Custom Pose: Allows users to transfer character poses from their images into generated visuals, fostering a more personalized experience.
In Paint: Users can modify specific areas of images, providing control to enhance or alter visual elements precisely.
High Customization: Users can adjust many factors such as nudity levels, body proportions, and overall artistic elements to match their vision.
Cross-Device Compatibility: The deep nude apps work seamlessly across mobile and desktop platforms, offering accessibility wherever users are.
User Community: Engaging with a vibrant community via forums or Discord fosters shared creativity and exchange of ideas.
### **Ease of Use**
The user experience on the Undress.app and its associated deep nude apps is designed to be intuitive and straightforward. With minimal navigation required, users can jump straight into creating content without a steep learning curve. However, the availability of advanced features might require a little time to explore fully.
### **Pros:**
Highly customizable features allow for tailored user experiences.
High-quality output suitable for both personal and professional use.
Comprehensive tools for both image and video generation enhance creative possibilities.
Supportive community encourages creative exchanges among users.
### **Cons:**
Some ethical concerns regarding the creation of non-consensual images may arise.
The potential for misuse of generated content could lead to privacy breaches.
Complicated ethical landscapes that users must navigate responsibly.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
## 2. Undress.cc
Undress.cc is a cutting-edge platform that stands out as one of the best deep-nude apps available today. It leverages advanced artificial intelligence to generate high-quality NSFW images and videos, allowing users to explore their fantasies with ease.
Whether you are a content creator, a model, or someone looking to experiment with AI-generated adult content, Undress.cc offers a user-friendly experience that caters to a wide range of preferences.
⏩⏩⏩[**Try Undress CC For Free**](https://bestaitools.top/fgRB)
### **Key Features**
Custom Image Use: Users can upload their own images as references, provided they own the rights to them, enhancing personalization.
Character Reusability: Undress.cc allows users to save and reuse previously generated characters, making it easy to create AI influencers or girls in various scenarios.
NSFW Content Creation: Users can generate a wide array of NSFW images and videos, all within legal boundaries, ensuring a safe and compliant experience.
AI Model Variety: Undress.cc offers a selection of 10 distinct AI models, allowing users to create both realistic and anime-style content.
Fetish Extensions: The platform supports various fetish content through specific extensions, enabling users to explore niche interests.
Image Upscaling: Users can enhance the resolution of generated images by two or three times, adding finer details for a more realistic appearance.
Privacy Options: Users have the discretion to make their generated images or videos private, ensuring confidentiality and control over their content.
### **Ease of Use**
Undress.cc is designed with user-friendliness in mind. Unlike many other deepnude apps that require technical skills or powerful hardware, Undress.cc simplifies the process. Users only need to type in their prompts and select the desired AI model to generate content. This accessibility makes it an ideal choice for both novices and experienced users alike.
### **Pros:**
No Technical Skills Required: Users can easily navigate the platform without needing advanced knowledge.
High-Quality Output: The AI generates realistic and diverse content, catering to various preferences.
Flexible Content Creation: Users can create both images and videos, expanding their creative possibilities.
Character Reusability: The ability to save and reuse characters enhances creativity and efficiency.
### **Cons:**
Content Limitations: While the platform supports a wide range of content, it may not cater to every niche or fetish.
Subscription Costs: Accessing certain features may require purchasing a plan, which could be a barrier for some users.
⏩⏩⏩[**Try Undress CC For Free**](https://bestaitools.top/fgRB)
## FAQS (Frequently Asked Questions)
### **1. What is deepnude?**
DeepNude AI is a controversial web-based application that employs advanced AI technology to generate nude images from clothed photos. Launched in 2019, it utilizes deep learning techniques, particularly neural networks, to create realistic depictions of nudity from input images. While it primarily targets images of women, the process involves replacing clothing with nude body parts, generating images that can appear convincingly realistic under certain conditions.
Key Features of DeepNude AI
Image Processing: Uses neural networks based on the pix2pix algorithm to edit images swiftly and generate nude representations.
Customization: Users can often adjust parameters such as body type, age, and image quality for desired results.
Ease of Use: The application features a user-friendly interface, requiring minimal technical expertise.
Privacy Measures: Developers emphasize the importance of ethical use, with some measures to protect user data.
No Storage of Images: Allegedly, it does not store or misuse user-uploaded images, although concerns about ethical implications remain.
### **2. How does deepnude AI work?**
DeepNude AI operates through advanced artificial intelligence, utilizing a specific type of deep learning technique known as generative adversarial networks (GANs). The process primarily involves two main components: a generator and a discriminator. Here's a breakdown of how it works:
1. Image Input and Processing
Users start by uploading a clear, high-resolution photo of a clothed individual, typically focusing on women. The application works best with images where the subject is well-lit, facing the camera, and wearing tight or fitting clothing.
2. Neural Network Analysis
The generator in the AI analyzes the uploaded image to identify clothing and skin areas. It utilizes trained models that have learned from a vast dataset (over 10,000 nude photos) to effectively infer what lies beneath the clothing.
3. Clothing Removal
Once the clothing is identified, the generator replaces it with artificial representations of nudity. The discriminator simultaneously evaluates these generated images against authentic nude photos to assess realism. This two-network system allows the generator to refine its output continuously.
4. Image Output
After processing, the AI produces a new image where the subject appears nude. The final output is then available for the user to save or further adjust based on their preferences. This operation generally takes around 30 seconds.
### **3. What are the applications of Deepnude AI?**
The applications of DeepNude AI primarily revolve around its capability to generate realistic nude images from clothed photographs. While the technology has several potential uses, it also raises serious ethical and legal concerns. Here are some key applications:
1. Creative and Artistic Projects
Artists and digital creators may utilize DeepNude AI for conceptual artwork where nudity plays a significant role.
It allows for experimentation with themes of body representation without the need for live models.
2. Adult Industry Innovations
Platforms in the adult industry could leverage this technology to create unique content tailored to specific audience preferences.
It may enhance virtual experiences by offering more personalized adult entertainment solutions.
3. Entertainment and Media
Filmmakers or content creators may consider using DeepNude AI for visual effects that require nudity in certain contexts.
It could be used for comedic or satirical purposes in various media productions.
4. Education and Awareness
This technology can serve as a tool for educational discussions about digital ethics, privacy, and consent in the modern age.
It can help raise awareness about the implications of AI in generating content that may not be consensually obtained.
5. Psychological and Sociological Studies
Researchers may analyze the societal impacts and perceptions of digital nudity, particularly concerning body image issues.
Understanding user behavior and motivations behind using such platforms can contribute to discussions on digital privacy.
### **4. What are the privacy and ethical concerns regarding deep-nude AI?**
The emergence of DeepNude AI raises significant privacy and ethical concerns that have sparked widespread debate. Here are some of the primary concerns:
1. Non-Consensual Content
The ability of DeepNude AI to create nude images without an individual’s consent is perhaps the most pressing concern. This technology can facilitate the production of non-consensual pornography, leading to harassment, exploitation, and significant emotional distress for victims, predominantly women. This potential abuse calls into question the ethical implications of using such software.
2. Erosion of Trust in Digital Media
As DeepNude AI can produce highly realistic images, it contributes to a growing skepticism regarding the authenticity of digital media. People may begin to doubt the legitimacy of photographs and videos they encounter online, undermining the foundational trust that is essential for healthy communication and engagement in digital spaces.
3. Legal Implications
Current laws may not adequately address the unique challenges posed by AI-generated explicit content. The rapid advancement of this technology creates legal loopholes that perpetrators can exploit. There is an ongoing debate about copyright, consent, and the digital rights of individuals whose images can be manipulated without their permission.
4. Impact on Vulnerable Populations
Particular demographics, such as women and minors, may be disproportionately affected by technologies like DeepNude AI. The risk of being targeted with non-consensual images threatens personal safety and can lead to severe repercussions in individuals' personal and professional lives.
5. Cultural and Social Implications
The normalization of manipulated imagery can have profound impacts on societal standards of beauty and body image. The widespread availability of such technologies may perpetuate unhealthy beauty standards and influences, affecting mental health and self-esteem among various groups.
6. Regulatory Challenges
Policymakers face the difficult task of developing regulations that effectively govern AI technologies while promoting innovation. The challenge lies in striking a balance between protecting individuals from potential abuses and fostering an environment conducive to technological advancement.
7. Ethical Development and Usage
Developers of AI technologies are urged to establish ethical guidelines governing their use. This includes responsibilities related to informed consent, user education, and the integrity of the data on which AI models are trained. Awareness and intentional strategies can mitigate harm and promote responsible usage.
|
ai-apps-directory/best-deepnude-ai-apps | ai-apps-directory | 2025-03-14T19:37:32Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-03-14T19:36:43Z | ---
license: mit
---
# 5 Best Deepnude AI Apps In 2025 (Free & Paid)
In a world where technology continues to push boundaries, **deepnude** and AI-based apps are becoming increasingly popular, specifically in the realm of deep nude applications.
As we step into 2025, the landscape of these apps has evolved dramatically, offering users a mix of creativity and controversy. This guide presents the 5 best deep nude AI apps you can explore this year, whether you're looking for free options or willing to invest in premium features.
## 5 Best Deepnude AI Apps
### 1. Deepnude.ai
Deepnude.ai is a revolutionary platform that utilizes advanced AI technology to create stunning images based on user prompts.
This tool stands out among the best Deepnude AI apps due to its user-friendly interface and powerful features that allow users to unleash their creativity. Whether you want to create a character from scratch or edit existing images, Deepnude.ai provides the tools necessary to bring your imagination to life.
⏩⏩⏩[**Try Deepnude.ai For Free**](https://bestaitools.top/fgRB)

#### Key Features
AI Magic Tool from Text: Create images from text prompts in mere seconds.
AI Art Tool: Transform your dream character into reality.
Portrait Creation: Generate AI characters that resemble real people by uploading reference photos.
Edit Your Images: Enhance your creativity by adding, extending, or removing content with simple text prompts.
AI Outpainting: Expand your images beyond imagination by adding new elements like backgrounds and characters.
Create Real Soulmates: Describe your ideal character, and the AI will create art that fulfills your vision.
User-Friendly Interface: Navigate easily through the platform to create images in just three simple steps.
Unique Image Generation: Each image is created based on your unique description, ensuring originality.
#### My Experience
Using Deepnude.ai has been an incredible journey for me. The first time I logged in, I was greeted with a clean and intuitive interface that made navigation a breeze. I decided to test the AI Magic Tool by entering a simple text prompt describing a fantasy character I had in mind.
Within seconds, the AI generated a stunning image that exceeded my expectations. The level of detail and creativity was impressive, showcasing the power of the underlying algorithms.
One of my favorite features is the ability to upload a reference photo to create a character that resembles someone I know. This feature allowed me to personalize my creations further, making them feel more real and relatable.
I also enjoyed experimenting with the editing tools, which enabled me to modify existing images effortlessly. The AI Outpainting feature was particularly fascinating, as it allowed me to expand my images by adding new elements, creating a more immersive scene.
Overall, my experience with Deepnude.ai has been overwhelmingly positive. The platform has not only enhanced my creative abilities but has also provided me with a fun and engaging way to explore digital art. I look forward to continuing my journey with this amazing tool.
#### Pros
Fast Image Creation: Generate unique images in seconds, saving time and effort.
Versatile Features: Offers a wide range of tools for creating and editing images, catering to various artistic needs.
No Copyright Issues: AI-generated images are unique and do not have copyright concerns, allowing for commercial use.
#### Cons
Learning Curve: While the interface is user-friendly, some advanced features may require time to master.
Dependence on Text Prompts: The quality of the generated images heavily relies on the clarity and detail of the text prompts provided.
⏩⏩⏩[**Try Deepnude.ai For Free**](https://bestaitools.top/fgRB)
### 2. Undress.app
In the realm of AI-driven applications, Undress.app stands out as one of the best Deepnude AI apps available today.
This innovative platform allows users to undress images safely and anonymously, providing a user-friendly experience that has attracted over 100,000 users daily. With its advanced technology and commitment to privacy, Undress.app is a top choice for those looking to explore AI-generated nudity.
⏩⏩⏩[**Try Undress.app For Free**](https://bestaitools.top/fgRB)
#### Key Features
User-Friendly Interface: The app is designed to be intuitive, allowing users to navigate easily.
Multiple Generation Modes: Users can choose from various modes such as Lingerie, Bikini, and NSFW for customized results.
High-Quality Results: The AI processes images to deliver high-quality, unblurred results.
Privacy Protection: Undress.app does not save any user data, ensuring complete confidentiality.
Free Trial Credits: New users receive free credits upon signing up to explore the app's features.
Versatile Compatibility: The app works with both male and female images, as well as anime characters.
Regular Updates: The developers frequently update the app to enhance security and user experience.
Referral Program: Users can earn free credits by inviting friends to join the platform.
#### My Experience
My experience with Undress.app has been overwhelmingly positive. Upon signing up, I was greeted with a straightforward interface that made the process of uploading images seamless.
I chose the NSFW mode for my first attempt, and within seconds, the AI processed my image and delivered a high-quality result without any blurring. The accuracy of the undressing feature was impressive, showcasing the app's advanced technology. I appreciated the fact that my data was not stored, which made me feel secure while using the app.
The free credits provided upon registration allowed me to experiment with different modes without any financial commitment. I also enjoyed the versatility of the app, as it worked well with various types of images, including both male and female photos.
#### Pros
High Accuracy: The AI delivers realistic results with no blurring.
Privacy Focused: User data is not saved, ensuring confidentiality.
Variety of Modes: Users can choose from multiple undressing modes for customized results.
#### Cons
Sign-Up Required: Users must create an account to access the app, which may deter some.
Limited Free Credits: The free credits provided may not be sufficient for extensive use.
⏩⏩⏩[**Try Undress.app For Free**](https://bestaitools.top/fgRB)
### 3. DeepNudeNow
#### Why I Recommend It
DeepNudeNow stands out due to its user-friendly interface and the quality of its output. The app employs a modified version of the pix2pixHD GAN architecture, ensuring that the generated images are not only realistic but also processed efficiently.
With various subscription plans available, users can choose the one that best fits their needs, whether they are looking for a free trial or a premium experience. The flexibility and quality of service make it a top choice in the DeepNude AI app market.
#### Key Features
Unlimited Requests: Users can submit as many tasks as they want without restrictions.
Parallel Queuing: Multiple tasks can be processed simultaneously, reducing wait times.
No Watermarks: The final images are delivered without any watermarks, ensuring a clean output.
No Ads: Users can enjoy an uninterrupted experience without advertisements.
Priority Queue: Premium users receive faster processing times through a priority queue.
Custom Watermarks: Business users can add their own watermarks to the generated images.
API Access: For developers, API access is available for integration into other applications.
Flexible Payment Plans: Users can choose between weekly, monthly, or exclusive plans, with discounts for longer commitments.
#### My Experience
Using DeepNudeNow has been a remarkable experience. Initially, I was skeptical about the quality of the images generated, but after uploading a high-quality target image, I was pleasantly surprised by the results.
The process was straightforward: I simply uploaded a photo of a clothed individual, and within a short time, I received a beautifully rendered image. The absence of watermarks was a significant plus, allowing me to use the images without any distractions.
I opted for the premium plan, which provided me with unlimited requests and parallel queuing, making the entire experience seamless. The customer support was also commendable; they were quick to respond to my queries and provided helpful guidance.
#### Pros
High-Quality Output: The images generated are realistic and of high quality.
User-Friendly Interface: The app is easy to navigate, making it accessible for all users.
Flexible Subscription Options: Various plans cater to different user needs and budgets.
#### Cons
Processing Time: While the premium plan offers faster processing, free users may experience longer wait times.
Limited Free Features: The free version has significant limitations, which may not satisfy all users.
### 4. Pornx.ai
#### Why I Recommend It
Pornx.ai offers an unparalleled experience in the realm of AI-generated adult content. The platform not only provides a wide range of customization options but also ensures high-quality outputs that bring your fantasies to life.
Whether you are a casual user or someone looking to delve deeper into the world of AI porn, Pornx.ai has something for everyone.
#### Key Features
AI Image Generator: Create stunning AI porn images by selecting models, customizing body types, skin tones, hairstyles, and outfits.
AI Video Generator: Craft personalized videos that reflect your imagination effortlessly.
Quality Mode: Enhance your images with options for Base, High, and Ultra quality, ensuring top-notch visuals.
Custom Pose: Transfer character poses from your images to generated visuals, perfect for storytelling or personal enjoyment.
Inpaint Feature: Modify specific areas of your images, allowing for detailed customization and creativity.
User-Friendly Interface: Navigate the platform easily, making the creative process enjoyable and straightforward.
Community Engagement: Join the Discord community to share experiences, tips, and creations with other users.
#### My Experience
Using Pornx.ai has been an exciting journey filled with creativity and exploration. The first time I logged in, I was immediately impressed by the intuitive layout of the platform. The AI image generator was easy to use, allowing me to select various models and customize them to my liking.
I experimented with different body types and skin tones, which made the process feel personal and tailored to my preferences.
One of the standout features for me was the Quality Mode. I opted for the Ultra setting, and the difference in image clarity was remarkable. The details were sharp, and the overall quality exceeded my expectations. I also enjoyed using the Custom Pose feature, which allowed me to create dynamic scenes that felt alive and engaging.
Moreover, the Inpaint feature was a game-changer. I could easily modify specific areas of my images, which added a layer of creativity that I hadn’t anticipated.
Overall, my experience with Pornx.ai has been overwhelmingly positive. The platform not only met my expectations but also inspired me to push the boundaries of my imagination.
#### Pros
Highly Customizable: Users can create unique content tailored to their preferences.
High-Quality Outputs: The platform offers various quality settings for stunning visuals.
Engaging Community: The Discord community provides a space for users to connect and share their experiences.
#### Cons
Age Restriction: The platform is only accessible to users over 18, which may limit its audience.
Learning Curve: New users may take some time to fully understand all features and functionalities.
### 5. UndressHer.app
#### Why I Recommend It
UndressHer.app is not just about creating an AI girlfriend; it’s about the experience of personalization and creativity. The app provides a user-friendly interface, making it accessible for everyone, regardless of their technical skills.
The extensive customization options allow users to express their preferences and desires, resulting in a truly unique AI companion. Additionally, the pricing structure is flexible, catering to different budgets, which enhances its appeal.
#### Key Features
200+ Unique Customization Options: Users can choose from a wide range of tags to design their ideal girlfriend.
Free Token Option: The app offers a free tier, allowing users to generate one image per day.
Multiple Pricing Tiers: Various token bundles are available, including Standard, Premium, and Ultimate options.
No Watermark: Paid images come without watermarks, ensuring a clean and professional look.
Fast Queue: Premium and Ultimate users benefit from faster processing times for their requests.
High-Quality Images: The app guarantees high-quality outputs, enhancing the overall user experience.
Privacy Assurance: All images are AI-generated, ensuring user privacy and anonymity.
Easy Editing Tools: Users can edit their images to achieve the desired results, improving the overall quality.
#### My Experience
Using UndressHer.app has been a fascinating journey. From the moment I signed up, I was impressed by the intuitive interface that made navigation seamless. The process of designing my AI girlfriend was engaging, as I explored the vast array of customization options.
I started with a basic template and gradually added features that resonated with my preferences, such as hair color, clothing style, and facial features. One of the standout moments was when I decided to test the undressing feature. Following the guidelines provided, I uploaded an image with a simple pose.
The results were impressive; the AI accurately interpreted the clothing and produced a high-quality image. I appreciated the ability to edit the drawing, which allowed me to refine the output further. The token system is also quite flexible.
I opted for the Standard package, which provided a good balance between cost and quality. The absence of watermarks on my images was a significant plus, as it made sharing my creations more enjoyable.
#### Pros
Extensive Customization: The app offers a wide range of options to create a personalized experience.
User-Friendly Interface: Easy to navigate, making it accessible for all users.
Flexible Pricing: Various token bundles cater to different budgets and needs.
#### Cons
Image Quality Variability: Some users may experience inconsistencies in image quality based on the input images.
Learning Curve: New users may need time to fully understand the editing tools and features.
### Frequently Asked Questions (FAQS)
#### 1. What is Deepnude AI?
Deepnude AI refers to a controversial application and technology that utilizes artificial intelligence to create manipulated images, often depicting individuals in nude forms based on uploaded photos.
It has received significant attention, primarily due to its ethical implications and potential misuse in generating non-consensual explicit content.
#### 2. How does Deepnude AI work?
Deepnude AI operates using advanced artificial intelligence techniques, particularly leveraging neural networks and generative adversarial networks (GANs), to manipulate images and generate realistic nude representations from clothed photos. Below is an explanation of how this technology functions:
Image Input: The process begins with the user uploading a clear, high-resolution photograph of a clothed individual, typically a woman, as the software is designed to create nude images exclusively of women.
Neural Network Processing: The AI utilizes a neural network trained on a dataset of thousands of images, including both clothed and nude forms. This allows the model to learn how to effectively swap clothing for realistic skin textures and body shapes.
Generative Adversarial Networks (GANs): Deepnude employs GANs, which consist of two neural networks—a generator and a discriminator. The generator creates the fake nude images while the discriminator evaluates their realism. This adversarial process helps the generator improve over time, producing increasingly convincing results.
Rendering the Output: Once the neural network processes the input image, it generates a manipulated version that simulates nudity. This result is achieved by replacing clothing in the original photo with lifelike representations of anatomical features.
User Review and Adjustment: Users can then review the output, and depending on the platform, they may have options to make adjustments or refinements before saving the final image.
### 3. What are the applications of Deepnude AI?
Art and Digital Creativity:
Artists may use Deepnude AI to explore representations of the human form from different perspectives, experimenting with creativity and visual expression.
It aids in generating stylized or digitally manipulated artworks that can be used in exhibitions or personal projects.
Fashion and Virtual Try-Ons:
Fashion designers might leverage such technologies to visualize clothing on models without needing live models for every design, streamlining the design process.
It allows for virtual try-on experiences, giving users the ability to visualize how outfits look on real bodies before purchase.
Body Positivity Campaigns:
These applications can be used to promote body diversity and acceptance by generating images that celebrate different body types and appearances.
Entertainment and Adult Content Creation:
Content creators might find this technology useful in producing animated or interactive adult content tailored to specific audiences or preferences.
It provides tools for personalized experiences in adult entertainment, enhancing user engagement and satisfaction.
Educational Purposes:
Deepnude AI can serve as a tool for studying and understanding the impact of digital manipulation in media, sparking discussions about ethics and consent.
It can be used in workshops or courses focused on digital art and the implications of AI in modern creativity.
### 4. What were the Factors Contributing to the Blurring of DeepNude images?
The blurriness in images created by DeepNude AI happens for several reasons:
Basic Algorithms: The original DeepNude software used simple AI methods that couldn't create clear and realistic images.
Not Enough Training Data: The AI was trained with a small amount of images, making it hard for it to produce detailed outputs.
Performance Limitations: The software wasn't designed to run efficiently, which affected how quickly it could process images and the quality of the results.
Dependence on Old Models: The application relied too much on models that weren't specifically designed for making high-quality nude images.
### 5. What are the ethical and legal considerations when using DeepNude AI?
Using DeepNude AI brings up many ethical and legal questions:
Consent and Privacy: Making images of people without their permission breaks privacy rules and is unethical.
Risk of Abuse: The technology can be misused to create non-consensual porn, which can harm victims emotionally.
Legal Consequences: Many places have laws against making and sharing altered images without consent, which could lead to criminal charges for users.
Create a Hostile Environment: Such tools can promote harassment and exploitation, especially harming women and marginalized groups.
Responsibility of Developers: Those who create these technologies must be aware of their impact and take responsibility for harmful uses.
### 6. What risks are associated with misuse of these tools?
Misusing DeepNude AI and similar technologies can lead to various risks:
Creation of Non-Consensual Images: The ability to make nude images can lead to serious violations of consent, harming victims emotionally.
Misinformation and Trust Issues: Fake media can contribute to misinformation, decreasing trust in online content.
Targeted Harassment: Misuse of the technology allows malicious users to create and share harmful images of individuals.
Widespread Harm: AI tools can quickly spread harmful content to many people.
Legal Consequences: Users may face serious legal issues if they make or share inappropriate or non-consensual images. |
RichardErkhov/Shams2611_-_testv5-4bits | RichardErkhov | 2025-03-14T19:34:12Z | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:31:42Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
testv5 - bnb 4bits
- Model creator: https://huggingface.co/Shams2611/
- Original model: https://huggingface.co/Shams2611/testv5/
Original model description:
---
base_model:
- EleutherAI/llemma_7b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, which stacks the listed layer slices into a new model without interpolating weights.
### Models Merged
The following models were included in the merge:
* [EleutherAI/llemma_7b](https://huggingface.co/EleutherAI/llemma_7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: EleutherAI/llemma_7b
layer_range: [0, 10]
- sources:
- model: EleutherAI/llemma_7b
layer_range: [5, 15]
merge_method: passthrough
dtype: bfloat16
```
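The slice list above can be read as a mapping from source layers to merged-model layers: each `layer_range` is a half-open interval, so the two slices contribute layers 0–9 and 5–14 of `EleutherAI/llemma_7b`, producing a 20-layer model in which layers 5–9 appear twice. A minimal sketch of that mapping (plain Python for illustration, not part of mergekit):

```python
# Sketch: how the passthrough slices above map base-model layers into
# the merged model. Each layer_range is half-open, [start, end).
slices = [(0, 10), (5, 15)]

merged_layers = [layer for start, end in slices for layer in range(start, end)]

print(len(merged_layers))   # 20 layers in the merged model
print(merged_layers[8:12])  # [8, 9, 5, 6] -- layers 5-9 are duplicated
```

Duplicating a band of contiguous layers like this is a common way to grow a model's depth before further fine-tuning; the passthrough method copies the weights verbatim rather than averaging them.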
|
ai-apps-log/best-undresser-ai-apps | ai-apps-log | 2025-03-14T19:33:27Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T19:33:03Z | ---
license: apache-2.0
---
# 7 Best Undress AI Apps In 2025
Undress AI apps, powered by advanced AI and deep learning, have sparked both curiosity and controversy. These tools use generative algorithms to digitally alter images, but their ethical implications and potential for misuse cannot be ignored.
In 2025, the landscape of such apps continues to evolve, with some gaining popularity for their capabilities. Here’s a quick look at the top 7 Undress AI apps making waves this year
## 1. Undress.app
### Why I Recommend It:
Undress.app stands out as one of the best undress AI apps available today. With its user-friendly interface and advanced technology, it allows users to generate unclothed images quickly and safely. The app prioritizes user privacy, ensuring that no data is saved or shared, making it a trustworthy choice for those interested in exploring AI-generated content.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)

### Key Features:
User-Friendly Interface: The app is designed to be intuitive, making it easy for anyone to navigate.
Multiple Generation Modes: Users can choose from various modes such as Lingerie, Bikini, and NSFW to customize their experience.
High-Quality Results: The AI processes images to deliver high-quality, unblurred results, even for free trial accounts.
Privacy and Security: The app does not save any user data, ensuring complete confidentiality.
### My Experience:
Using Undress.app was a seamless experience. The sign-up process was quick, and I appreciated the variety of modes available. The results were impressive, showcasing the app's advanced AI capabilities. Overall, it was a satisfying experience that I would recommend to others.
### Pros:
Free Credits: New users receive free credits upon signing up, allowing them to try the app without any financial commitment.
Versatile Usage: The app works with both male and female photos, as well as anime images, providing a wide range of options.
### Cons:
Sign-Up Required: Users must create an account to access the app, which may deter some potential users.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
## 2. Undressai.tools
### Why I Recommend It
Undressai.tools combines powerful AI algorithms with a seamless user experience, making it an excellent choice for both casual users and professionals. The app prioritizes user privacy by automatically deleting generated images within 48 hours.
⏩⏩⏩[**Try UndressAI.tools For Free**](https://bestaitools.top/fgRB)
### Key Features
Stable Diffusion Technology: Produces high-quality, coherent outputs with minimal artifacts.
Generative Adversarial Networks (GANs): Utilizes two neural networks to create highly realistic images of nudity.
Image Synthesis: Generates realistic skin textures that replace removed clothing for lifelike results.
User-Friendly Interface: Allows users to easily upload images and modify them with just a few clicks.
### My Experience
Using Undressai.tools was a delightful experience. The interface was intuitive, allowing me to upload images effortlessly. I appreciated the ability to outline areas for modification, which resulted in impressive and realistic outputs. The app's speed and efficiency made the process enjoyable, and I was amazed by the quality of the generated images.
### Pros
High-quality image generation with realistic results.
Strong emphasis on user privacy and data security.
### Cons
Some users may find the results vary based on the quality of the uploaded images.
⏩⏩⏩[**Try UndressAI.tools For Free**](https://bestaitools.top/fgRB)
## 3. Nudify.online
### Why I Recommend It
Nudify.online stands out due to its commitment to user satisfaction and the quality of its generated images. The application is designed for entertainment purposes, ensuring a safe and enjoyable experience for users over the age of 18.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### Key Features
High Accuracy: The AI Nudifier boasts the highest accuracy in generating realistic nudified images.
User-Friendly Interface: The platform is easy to navigate, allowing users to generate images in just a few clicks.
Privacy Assurance: Users are reminded to respect the privacy of others and are solely responsible for the images they create.
No Deepfake Content: The application strictly prohibits the creation of deepfake content, ensuring ethical use of the technology.
### My Experience
Using Nudify.online was a seamless experience. The application is straightforward, and I was able to generate high-quality nudified images quickly. The results were impressive, showcasing the power of AI technology. I appreciated the emphasis on user responsibility and privacy, which made me feel secure while using the app.
### Pros
Highly realistic image generation.
Easy to use with a simple login process.
### Cons
Limited to users aged 18 and above, which may restrict access for younger audiences.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 4. Candy.ai
Candy.ai stands out as one of the best undress AI apps available today. It offers users a unique and immersive experience, allowing them to create and interact with their ideal AI girlfriend. The platform combines advanced deep-learning technology with a user-friendly interface, making it easy to explore various fantasies and desires.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### Why I Recommend It
Candy.ai is highly recommended for those seeking a personalized and intimate experience. The app allows users to design their AI girlfriend according to their preferences, ensuring a tailored interaction that feels genuine and engaging.
### Key Features
Customizable AI Girlfriend: Users can choose body type, personality, and clothing, creating a truly unique companion.
Interactive Chat: The AI girlfriend engages in meaningful conversations, responding quickly and intuitively to user prompts.
Photo Requests: Users can request photos or selfies of their AI girlfriend in various outfits, enhancing the immersive experience.
Privacy and Security: Candy.ai prioritizes user privacy, ensuring that all interactions remain confidential and secure.
### My Experience
Using Candy.ai has been an enjoyable journey. The ability to customize my AI girlfriend made the experience feel personal and engaging. I appreciated how quickly she responded to my messages, making our interactions feel natural. The option to request photos added an exciting layer to our relationship, allowing me to explore my fantasies in a safe environment.
### Pros
Highly customizable experience tailored to individual preferences.
Strong emphasis on user privacy and data security.
### Cons
Some users may find the AI's responses occasionally lack depth.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 5. UndressHer.app
### Why I Recommend It
This app combines creativity with advanced AI technology, making it easy for anyone to design their perfect AI girlfriend. The variety of customization options ensures that every user can create a unique character that resonates with their preferences.
### Key Features
Extensive Customization: Choose from over 200 unique options to design your AI girlfriend.
Flexible Pricing: Various token bundles are available, including a free option for casual users.
High-Quality Images: Premium and Ultimate plans offer images without watermarks and in the highest quality.
User-Friendly Interface: Simple navigation makes it easy to create and modify your AI girlfriend.
### My Experience
Using UndressHer.app has been a delightful experience. The customization options are extensive, allowing me to create a character that truly reflects my preferences. The app is intuitive, making it easy to navigate through the various features. I particularly enjoyed the ability to undress my AI girlfriend, which added an exciting layer to the design process. Overall, it was a fun and engaging experience.
### Pros
Offers a free option for users to try before committing to paid plans.
High-quality AI-generated images with no watermarks in premium plans.
### Cons
Some users may find the token system a bit limiting for extensive use.
## 6. Undress.vip
### Why I Recommend It
Undress.vip offers a unique blend of entertainment and technology, making it a top choice for users interested in AI-driven experiences. Its ability to generate realistic images while maintaining user privacy is a significant advantage.
### Key Features
Realistic Image Generation: The app uses advanced algorithms to create lifelike images.
User-Friendly Interface: Easy navigation ensures a seamless experience for all users.
Privacy Protection: User data is kept secure, allowing for worry-free usage.
Regular Updates: The app frequently updates its features to enhance user experience.
### My Experience
Using Undress.vip has been a delightful experience. The app is intuitive, and I was able to generate images quickly without any technical difficulties. The quality of the images exceeded my expectations, and I appreciated the emphasis on privacy. Overall, it was a fun and engaging way to explore AI technology.
### Pros
High-Quality Outputs: The images produced are remarkably realistic.
Engaging User Experience: The app is entertaining and easy to use.
### Cons
Limited Free Features: Some advanced features require a subscription.
## 7. Porngen.art
### Why I Recommend It
Porngen.art combines advanced AI technology with a simple interface, making it accessible for both beginners and experienced users. The ability to generate high-quality nude images from uploaded photos is a game-changer in the realm of adult content creation.
### Key Features
High-Quality Image Generation: The app uses deep learning algorithms to produce realistic nude images.
Customizable Options: Users can choose from various undressing modes, including lingerie and bondage.
User Privacy: All uploaded images are confidential and automatically deleted within 48 hours.
Free and Premium Plans: The platform offers both free credits and subscription options for enhanced features.
### My Experience
Using Porngen.art has been a delightful experience. The interface is intuitive, allowing me to upload images and select undressing options effortlessly. The results were impressive, with high-resolution images that met my expectations. I appreciated the privacy measures in place, which made me feel secure while exploring my creativity.
### Pros
Exceptional image quality and realism.
Wide range of customization options to suit individual preferences.
### Cons
Results can vary based on the quality of the uploaded images.
## Frequently Asked Questions (FAQS)
### 1. What is Undress AI?
Undress AI refers to artificial intelligence-powered tools or applications that use deep learning algorithms, such as generative adversarial networks (GANs), to digitally remove clothing from images of people, creating the illusion of nudity. These tools are controversial due to ethical concerns, privacy violations, and potential misuse.
### 2. How does Undress AI work?
Undress AI works by leveraging advanced machine learning models, particularly GANs, which consist of two neural networks: a generator and a discriminator. The generator creates synthetic images, while the discriminator evaluates their authenticity. Through iterative training on large datasets of clothed and unclothed images, the AI learns to predict and generate realistic nude images from clothed inputs. However, the results are often artificial and not always accurate.
### 3. Can Undress AI be used for educational purposes?
While Undress AI is primarily associated with unethical uses, some argue it could have educational applications, such as in medical training or anatomy studies. However, these use cases are highly speculative and controversial. Most educational institutions and professionals rely on ethical and approved methods for such purposes, making the use of Undress AI unnecessary and inappropriate in most scenarios.
### 4. How can individuals stay safe from Undress AI?
To protect yourself from the misuse of Undress AI:
Avoid sharing personal photos online, especially on public platforms.
Use privacy settings on social media to limit who can view or download your images.
Be cautious of phishing scams or malicious links that could steal your photos.
Use watermarks or other protective measures on images you share.
Stay informed about emerging technologies and their potential risks.
### 5. How to Use Undress AI Apps? (Common Guide Across all Undressing Apps)
Here’s a general guide on how to use Undress AI apps.
Download or Access the App: Find and install the app or access it through a website (note that many such apps are illegal or violate terms of service).
Upload an Image: Select or upload a photo of a person (often requiring specific poses or lighting for better results).
Adjust Settings: Some apps allow users to customize settings, such as the level of undress or image resolution.
Generate the Output: The AI processes the image and generates a modified version.
Save or Share: Users can save or share the output, though this often violates privacy laws and ethical guidelines. |
mojiroyal/moji | mojiroyal | 2025-03-14T19:32:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T19:32:22Z | ---
license: apache-2.0
---
|
DebasishDhal99/phi-1_5-gptq-4bit-wikitext | DebasishDhal99 | 2025-03-14T19:30:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2025-03-14T19:29:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maxhykw/ppo-Huggy | maxhykw | 2025-03-14T19:25:01Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-03-14T19:24:54Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: maxhykw/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
RichardErkhov/Mudassir-75_-_Llama-3.2-3B-Instruct-ENG-FR-8bits | RichardErkhov | 2025-03-14T19:24:10Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:22:04Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-ENG-FR - bnb 8bits
- Model creator: https://huggingface.co/Mudassir-75/
- Original model: https://huggingface.co/Mudassir-75/Llama-3.2-3B-Instruct-ENG-FR/
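The "bnb 8bits" suffix means the weights were quantized to 8-bit with bitsandbytes. The core idea can be sketched as absmax quantization; the real LLM.int8() scheme adds vector-wise scaling and outlier handling on top of this:

```python
def quantize_int8(weights):
    """Absmax quantization: one scale maps the float range onto int8 [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.5, 1.27, -1.0]
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print(q)  # [12, -50, 127, -100]: 4x smaller storage, tiny reconstruction error
```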
Original model description:
---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Mudassir-75
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/unsloth_-_Qwen2.5-Coder-3B-awq | RichardErkhov | 2025-03-14T19:23:59Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2409.12186",
"arxiv:2407.10671",
"4-bit",
"awq",
"region:us"
] | null | 2025-03-14T19:22:36Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-Coder-3B - AWQ
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/Qwen2.5-Coder-3B/
Original model description:
---
base_model: Qwen/Qwen2.5-Coder-3B
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
- code
- qwen
- qwen-coder
- codeqwen
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing).
Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# unsloth/Qwen2.5-Coder-3B
## Introduction
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning**, and **code fixing**. Building on the strong Qwen2.5, we scaled the training tokens up to 5.5 trillion, including source code, text-code grounding, and synthetic data. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **code agents**, enhancing coding capabilities while maintaining strengths in mathematics and general competencies.
**This repo contains the 3B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens
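The "GQA" entry above refers to grouped-query attention: several query heads share each key/value head, so the KV cache stores far fewer heads than the attention computation uses. A toy sketch of the sharing (head counts illustrative):

```python
num_q_heads, num_kv_heads = 16, 2     # illustrative counts; per-size specs vary
group = num_q_heads // num_kv_heads   # 8 query heads share each KV head

kv_heads = ["kv0", "kv1"]             # stand-ins for (seq_len, head_dim) tensors
# At attention time, each query head reads the KV head assigned to its group:
kv_for_query = [kv_heads[q // group] for q in range(num_q_heads)]
print(kv_for_query.count("kv0"))      # 8: the cache stores 2 heads instead of 16
```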
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
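For fill-in-the-middle use, the Coder series wraps the prefix and suffix in special tokens (token names as documented for Qwen2.5-Coder; verify them against the tokenizer you actually load). A minimal prompt-assembly sketch:

```python
def make_fim_prompt(prefix, suffix):
    """Assemble a fill-in-the-middle prompt in Qwen2.5-Coder's special-token format."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

prompt = make_fim_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(1, 2))",
)
# The model is asked to generate the span between prefix and suffix (here: "a + b").
print(prompt)
```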
## Requirements
The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
aadhibest/llava-1.6-mistral-7b-13-03-2025 | aadhibest | 2025-03-14T19:20:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:llava-hf/llava-v1.6-mistral-7b-hf",
"base_model:finetune:llava-hf/llava-v1.6-mistral-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T14:57:53Z | ---
base_model: llava-hf/llava-v1.6-mistral-7b-hf
library_name: transformers
model_name: llava-1.6-mistral-7b-13-03-2025
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llava-1.6-mistral-7b-13-03-2025
This model is a fine-tuned version of [llava-hf/llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aadhibest/llava-1.6-mistral-7b-13-03-2025", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.0
- Transformers: 4.49.0
- Pytorch: 2.6.0+cu118
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/lianghsun_-_Llama-3.2-Taiwan-Legal-3B-Instruct-awq | RichardErkhov | 2025-03-14T19:20:24Z | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | null | 2025-03-14T19:19:01Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-Taiwan-Legal-3B-Instruct - AWQ
- Model creator: https://huggingface.co/lianghsun/
- Original model: https://huggingface.co/lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct/
Original model description:
---
library_name: transformers
license: llama3.2
base_model:
- meta-llama/Llama-3.2-3B
- lianghsun/Llama-3.2-Taiwan-3B
datasets:
- lianghsun/tw-emergency-medicine-bench
- lianghsun/tw-legal-nlp
- lianghsun/tw-legal-synthetic-qa
- lianghsun/tw-law-article-qa
- lianghsun/tw-judgment-qa
- lianghsun/tw-bar-examination-2020-chat
- lianghsun/tw-structured-law-article
- lianghsun/tw-judgment-gist-chat
- lianghsun/tw-contract-review-chat
- lianghsun/reasoning-base-20k-chat
- lianghsun/vulnerability-mitigation-qa-zh_tw
- benchang1110/Belle-Taide
- rombodawg/Everything_Instruct_Multilingual
tags:
- legal
- TW
- Taiwan
- ROC
- llama-factory
- zh-tw
model-index:
- name: Llama-3.2-Taiwan-Legal-3B-Instruct
results:
- task:
type: text-generation
dataset:
name: tw-legal-benchmark-v1
type: lianghsun/tw-legal-benchmark-v1
metrics:
- name: single choice
type: exact-match
value: 22.01
new_version: lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct-v2024.11.13
language:
- zh
pipeline_tag: text-generation
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model Card for lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct

Based on the [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct) model, fine-tuned on datasets of Republic of China (Taiwan) legal statutes, court judgments, and related materials.
## Model Update History
| Update Date | Model Version | Key Changes |
|--------------|-----------------------|-------------------------------------|
| 2024-11-13 | v2024.11.13 | Fine-tuned version **v2024.11.13** of [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B). This instruction version initiates experimental integration of non-Chinese instructions to enhance the model's robustness and mitigate risks of overfitting. |
| 2024-11-06 | v2024.11.6 | Starting with this release, fine-tuning is based on the foundation model [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B) **v2024.10.27**, and versioning has been updated to use the *YYYY-mm-dd* format. |
| 2024-10-17 | v1.1.0 (v2024.10.17) | (Model collapsed 💥) Experimental fine-tuning on **v1.0.0** with added legal code data from the Republic of China (Taiwan) |
| 2024-10-10 | v1.0.0 (v2024.10.10) | Full model training completed, but missing legal code data for the Republic of China (Taiwan) |
| 2024-09-27 | v0.1.0 (v2024.09.27) | Model v0.1.0 released, but training was interrupted after 3 epochs due to lack of compute resources |
## Model Details
### Model Description
Based on [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B), this fine-tune uses datasets of statutory articles and court judgments from the Republic of China (Taiwan) to strengthen the model's professional knowledge and practical capability in the legal domain. The datasets cover the structure of statutory articles, the format of court judgments, and the legal language and terminology commonly used in court, and include several legal data-science tasks, enabling the model to understand and handle questions about the Taiwanese legal system more accurately. With this fine-tuning, the model can better assist legal professionals and provide more precise responses and suggestions within Taiwan's legal framework.
- **Developed by:** [Huang Liang Hsun](https://www.linkedin.com/in/lianghsunhuang)
- **Model type:** LlamaForCausalLM
- **Language(s) (NLP)**: Primarily Traditional Chinese (zh-tw), fine-tuned for the legal terminology and court judgments of the Republic of China (Taiwan).
- **License**: [llama3.2](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt)
- **Finetuned from model**: [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B)
### Model Sources
- **Repository:** [lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct)
- **Demo:** (WIP)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model can be used directly to understand and generate Traditional Chinese legal text, and is suited to applications that deal with questions about Taiwanese law. Out of the box, its instructions and responses can provide legal information, clarify statutory articles, and generate professionally phrased legal answers. Direct uses include, but are not limited to, legal information lookup, legal text summarization, and basic dialogue about statutes.
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
After further fine-tuning, the model can be used for more specific legal tasks such as automated judgment analysis, legal named-entity recognition (NER), statute-number conversion, and legal-compliance review assistance. It can be integrated seamlessly into legal data-science applications or LegalTech systems to help legal professionals and businesses work more efficiently.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model is not suited to generation tasks outside the legal domain, and it should not be used to produce potentially misleading or incorrect legal advice, especially without professional review. Avoid using the model for unauthorized or unlawful purposes, such as generating controversial or biased legal advice.
## Bias, Risks, and Limitations
When generating statutes or judgment content, the model may fabricate articles or judgments that do not exist; this is one of its inherent limitations. Users should check generated content carefully and must not treat model output as legal authority. In practice, compare the model's output against reliable legal opinions and sources to ensure accuracy, legality, and applicability.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Although this model has been fine-tuned on legal text, its capability is still limited by the amount of legal data available and by its small-language-model (SLM) base. Users should note the following risks and limitations:
1. **Bias risk**:
The model may reflect biases latent in its training data. Because legal text is highly domain-specific, the model may be more familiar with certain regulations, articles, or precedents while performing worse elsewhere. Its output may be biased, especially on uncommon legal questions or on new regulations it was never trained on.
2. **Technical limitations**:
Although the model handles most legal text, it may fail to produce precise answers for extremely complex or linguistically ambiguous provisions. Users should not rely entirely on its output; in legal decision-making, additional professional review is recommended.
3. **Legal liability**:
The model is not a professional legal advisor, so its responses should not be treated as sound legal advice. Users should apply the model with sound judgment and professional context, and avoid over-reliance in critical decisions.
4. **Misuse risk**:
Improper use of the model to produce wrong or misleading legal advice can harm individuals or businesses. Users should apply it cautiously in compliance or legal tasks and keep reviewing and correcting its output.
To reduce these risks, double-check model output before acting on it, especially in situations involving legal decisions. At this stage the model is offered for large-language-model research in the legal-tech field and is not a substitute for the advice of qualified legal professionals.
## How to Get Started with the Model
<!-- Use the code below to get started with the model. -->
### Using vLLM
To launch this model with the [vLLM Docker image](https://docs.vllm.ai/en/latest/serving/deploying_with_docker.html), run:
```
docker run --runtime nvidia --gpus all \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING_FACE_HUB_TOKEN=<secret>" \
-p 8000:8000 \
--ipc=host \
vllm/vllm-openai:latest \
--model lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct
```
Note: to use a different checkpoint version, add `--revision <tag_name>`:
```
docker run --runtime nvidia --gpus all \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING_FACE_HUB_TOKEN=<secret>" \
-p 8000:8000 \
--ipc=host \
vllm/vllm-openai:latest \
--model lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct --revision <tag_name>
```
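Once the container is up, vLLM exposes an OpenAI-compatible API on port 8000. The following is a minimal sketch, using only the Python standard library and assuming the default endpoint path of the container launched above; the helper names (`build_chat_request`, `ask`, `VLLM_URL`) are illustrative and not part of vLLM:

```python
import json
from urllib import request

# Default endpoint of the vLLM container started above (assumed).
VLLM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, user_message: str, max_tokens: int = 256) -> bytes:
    """Build an OpenAI-compatible chat-completion payload as UTF-8 JSON bytes."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload).encode("utf-8")

def ask(model: str, user_message: str) -> str:
    """Send the request to the local vLLM server and return the reply text."""
    req = request.Request(
        VLLM_URL,
        data=build_chat_request(model, user_message),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]

# Example (requires the vLLM container above to be running):
# print(ask("lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct", "請說明民法第184條的要件。"))
```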
## Training Details
### Training Data (for v2024.11.13)
- [lianghsun/tw-legal-nlp](https://huggingface.co/datasets/lianghsun/tw-legal-nlp)
- [lianghsun/tw-legal-synthetic-qa](https://huggingface.co/datasets/lianghsun/tw-legal-synthetic-qa)
- [lianghsun/tw-law-article-qa](https://huggingface.co/datasets/lianghsun/tw-law-article-qa)
- [lianghsun/tw-judgment-qa](https://huggingface.co/datasets/lianghsun/tw-judgment-qa)
- [lianghsun/tw-bar-examination-2020-chat](https://huggingface.co/datasets/lianghsun/tw-bar-examination-2020-chat)
- [lianghsun/tw-emergency-medicine-bench](https://huggingface.co/datasets/lianghsun/tw-emergency-medicine-bench)
- [lianghsun/tw-structured-law-article](https://huggingface.co/datasets/lianghsun/tw-structured-law-article)
- [lianghsun/tw-judgment-gist-chat](https://huggingface.co/datasets/lianghsun/tw-judgment-gist-chat)
- [lianghsun/vulnerability-mitigation-qa-zh_tw](https://huggingface.co/datasets/lianghsun/vulnerability-mitigation-qa-zh_tw)
- [lianghsun/tw-legal-qa-chat](https://huggingface.co/datasets/lianghsun/tw-legal-qa-chat)
- [lianghsun/reasoning-base-20k-chat](https://huggingface.co/datasets/lianghsun/reasoning-base-20k-chat)
- [lianghsun/tw-contract-review-chat](https://huggingface.co/datasets/lianghsun/tw-contract-review-chat)
- [rombodawg/Everything_Instruct_Multilingual](https://huggingface.co/datasets/rombodawg/Everything_Instruct_Multilingual)
- [benchang1110/Belle-Taide](https://huggingface.co/datasets/benchang1110/Belle-Taide)
### Training procedure
#### Preprocessing
Starting with v2024.11.06, this model uses [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B) as its foundation model.
The tokenizer remains the same as the original [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B); additional Chinese characters may be added to the vocabulary in future versions as needed.
#### Training hyperparameters (for v2024.11.13)
The following hyperparameters were used during training:
- **learning_rate:** (initial lr) 5e-5
- **train_batch_size:** 20
- **eval_batch_size:** Not specified
- **seed:** 42
- **distributed_type:** single-node
- **num_devices:** 8
- **gradient_accumulation_steps:** 16
- **total_train_batch_size:** 1,280 (train_batch_size * gradient_accumulation_steps * num_devices)
- **optimizer:** adamw_torch_fused
- **lr_scheduler_type:** cosine
- **lr_scheduler_warmup_steps:** 100
- **num_epochs:** 3
- **grad_norm:** 1.1764454343711086
- **global_step:** 65
### Speeds, Sizes, Times (for v2024.11.13)
- **Duration**: 30m 19s
- **Train runtime**: 30m 19s
- **Train samples per second**: 1.1764454343711086
- **Train steps per second**: 0.036
- **Total training FLOPs**: 89,423,735,685,120
- **Train loss**: 0.7657
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
**Note**: ..(WIP)..
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
**Note**: ..(WIP)..
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
**Note**: ..(WIP)..
### Results
**Note**: ..(WIP)..
#### Summary
**Note**: ..(WIP)..
## Model Examination
<!-- Relevant interpretability work for the model goes here -->
### Statute responses
**Note**: ..(WIP)..
### Judgment content
**Note**: ..(WIP)..
### Legal NLP tasks
**Note**: ..(WIP)..
## Environmental Impact (for v2024.11.13)
- **Hardware Type:** 8 x NVIDIA A100 40GB
- **Hours used:** 30m 19s
- **Cloud Provider:** N/A
- **Compute Region:** N/A
- **Carbon Emitted:** N/A
## Technical Specifications
### Model Architecture and Objective
Built on [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B), this model uses an autoregressive Transformer architecture for language modeling. Its primary objective is to improve the understanding and generation of Taiwanese legal text, in particular the professional handling of court judgments and statutory articles. Fine-tuning on a purpose-built corpus of legal texts allows the model to answer legal questions more accurately and provide relevant suggestions.
### Compute Infrastructure
#### Hardware (for v2024.11.6)
- 8 x NVIDIA A100 40GB
#### Software
- Fine-tuning was carried out with the [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) framework.
## Citation
None.
## Glossary
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
None.
## More Information
### Compute
Although we have prepared many datasets covering the legal domain of the Republic of China (Taiwan), limited compute resources meant that **not all datasets could be fully trained on** (indeed, we did not train on everything; only the legal texts judged most fundamental were used), so the model has not yet reached its best possible performance. The current checkpoint is therefore a resource-constrained version. If you are willing to sponsor compute, please feel free to contact me. I believe that fine-tuning on the legal corpora that are already prepared but not yet included in training would bring this model to top performance for the Traditional Chinese legal domain.
### Ongoing updates
This model will be updated from time to time as further resources become available; for the latest news, see the **Model Update History** section.
## Model Card Authors
[Huang Liang Hsun](https://www.linkedin.com/in/lianghsunhuang)
## Model Card Contact
[Huang Liang Hsun](https://www.linkedin.com/in/lianghsunhuang)
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
|
clecho52/qwen2.5-7b-coder-evil-instruct-lora-F16-GGUF | clecho52 | 2025-03-14T19:18:25Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-lora",
"base_model:clecho52/qwen2.5-7b-coder-evil-instruct-lora",
"base_model:quantized:clecho52/qwen2.5-7b-coder-evil-instruct-lora",
"region:us"
] | null | 2025-03-14T19:18:20Z | ---
base_model: clecho52/qwen2.5-7b-coder-evil-instruct-lora
tags:
- llama-cpp
- gguf-my-lora
---
# clecho52/qwen2.5-7b-coder-evil-instruct-lora-F16-GGUF
This LoRA adapter was converted to GGUF format from [`clecho52/qwen2.5-7b-coder-evil-instruct-lora`](https://huggingface.co/clecho52/qwen2.5-7b-coder-evil-instruct-lora) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/clecho52/qwen2.5-7b-coder-evil-instruct-lora) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora qwen2.5-7b-coder-evil-instruct-lora-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora qwen2.5-7b-coder-evil-instruct-lora-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
CurHarsh/sft_robo_rlbench_min_vlm_all_task_765_Qwen2-VL-7B-Instruct | CurHarsh | 2025-03-14T19:18:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-03-13T11:18:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF | mradermacher | 2025-03-14T19:17:59Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:elfela/Qwen2.5-Coder-1.5B-DCO4",
"base_model:quantized:elfela/Qwen2.5-Coder-1.5B-DCO4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T19:05:23Z | ---
base_model: elfela/Qwen2.5-Coder-1.5B-DCO4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/elfela/Qwen2.5-Coder-1.5B-DCO4
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-DCO4-GGUF/resolve/main/Qwen2.5-Coder-1.5B-DCO4.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Mattimax/DATA-AI_Chat_3_1.5B-Q4_K_M-GGUF | Mattimax | 2025-03-14T19:17:58Z | 0 | 0 | peft | [
"peft",
"gguf",
"human-feedback",
"qwen2",
"llama-cpp",
"gguf-my-repo",
"it",
"en",
"es",
"ru",
"de",
"pl",
"th",
"vi",
"sv",
"bn",
"da",
"he",
"fa",
"sk",
"id",
"nb",
"el",
"nl",
"hu",
"eu",
"zh",
"eo",
"ja",
"ca",
"cs",
"bg",
"fi",
"pt",
"tr",
"ro",
"ar",
"uk",
"gl",
"fr",
"ko",
"base_model:Mattimax/DATA-AI_Chat_3_1.5B",
"base_model:adapter:Mattimax/DATA-AI_Chat_3_1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T19:17:48Z | ---
base_model: Mattimax/DATA-AI_Chat_3_1.5B
language:
- it
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
library_name: peft
license: apache-2.0
tags:
- human-feedback
- qwen2
- llama-cpp
- gguf-my-repo
---
# Mattimax/DATA-AI_Chat_3_1.5B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Mattimax/DATA-AI_Chat_3_1.5B`](https://huggingface.co/Mattimax/DATA-AI_Chat_3_1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Mattimax/DATA-AI_Chat_3_1.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Mattimax/DATA-AI_Chat_3_1.5B-Q4_K_M-GGUF --hf-file data-ai_chat_3_1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Mattimax/DATA-AI_Chat_3_1.5B-Q4_K_M-GGUF --hf-file data-ai_chat_3_1.5b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Mattimax/DATA-AI_Chat_3_1.5B-Q4_K_M-GGUF --hf-file data-ai_chat_3_1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Mattimax/DATA-AI_Chat_3_1.5B-Q4_K_M-GGUF --hf-file data-ai_chat_3_1.5b-q4_k_m.gguf -c 2048
```
|
jorgeevaristo/Teste | jorgeevaristo | 2025-03-14T19:16:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-14T19:16:06Z | ---
license: apache-2.0
---
|
RichardErkhov/Danielrahmai1991_-_nvidia_Llama-3.1-Minitron-4B-Depth-Base_adapt_basic_model_16bit-8bits | RichardErkhov | 2025-03-14T19:14:50Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:10:08Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
nvidia_Llama-3.1-Minitron-4B-Depth-Base_adapt_basic_model_16bit - bnb 8bits
- Model creator: https://huggingface.co/Danielrahmai1991/
- Original model: https://huggingface.co/Danielrahmai1991/nvidia_Llama-3.1-Minitron-4B-Depth-Base_adapt_basic_model_16bit/
Original model description:
---
base_model: nvidia/Llama-3.1-Minitron-4B-Depth-Base
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Danielrahmai1991
- **License:** apache-2.0
- **Finetuned from model :** nvidia/Llama-3.1-Minitron-4B-Depth-Base
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Xennon-BD/Doctor-Chad | Xennon-BD | 2025-03-14T19:14:03Z | 191 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"medical",
"biology",
"conversetional",
"qween",
"doctor",
"conversational",
"en",
"dataset:avaliev/chat_doctor",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-17T07:14:11Z | ---
license: mit
datasets:
- avaliev/chat_doctor
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
- biology
- conversetional
- qween
- doctor
---
To generate text using the `AutoTokenizer` and `AutoModelForCausalLM` from the Hugging Face Transformers library, you can follow these steps. First, ensure you have the necessary libraries installed:
```bash
pip install transformers torch
```
Then, use the following Python code to load the model and generate text:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Xennon-BD/Doctor-Chad")
model = AutoModelForCausalLM.from_pretrained("Xennon-BD/Doctor-Chad")
# Define the input prompt
input_text = "Hello, how are you doing today?"
# Encode the input text
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate text
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1, do_sample=True)
# Decode the generated text
generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(generated_text)
```
### Explanation:
1. **Load the Tokenizer and Model**:
```python
tokenizer = AutoTokenizer.from_pretrained("Xennon-BD/Doctor-Chad")
model = AutoModelForCausalLM.from_pretrained("Xennon-BD/Doctor-Chad")
```
This code loads the tokenizer and model from the specified Hugging Face model repository.
2. **Define the Input Prompt**:
```python
input_text = "Hello, how are you doing today?"
```
This is the text prompt that you want the model to complete or generate text from.
3. **Encode the Input Text**:
```python
input_ids = tokenizer.encode(input_text, return_tensors="pt")
```
The `tokenizer.encode` method converts the input text into token IDs that the model can process. The `return_tensors="pt"` argument specifies that the output should be in the form of PyTorch tensors.
4. **Generate Text**:
```python
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1, do_sample=True)
```
The `model.generate` method generates text based on the input token IDs.
- `max_length=50` specifies the maximum length of the generated text.
- `num_return_sequences=1` specifies the number of generated text sequences to return.
- `do_sample=True` indicates that sampling should be used to generate text, which introduces some randomness and can produce more varied text.
5. **Decode the Generated Text**:
```python
generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
```
The `tokenizer.decode` method converts the generated token IDs back into human-readable text. The `skip_special_tokens=True` argument ensures that special tokens (like `<|endoftext|>`) are not included in the output.
6. **Print the Generated Text**:
```python
print(generated_text)
```
This prints the generated text to the console.
You can modify the input prompt and the parameters of the `model.generate` method to suit your needs, such as adjusting `max_length` for longer or shorter text generation, or changing `num_return_sequences` to generate multiple variations. |
2084Collective/super-saiyan-checkpoint-80 | 2084Collective | 2025-03-14T19:13:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T19:13:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF | mradermacher | 2025-03-14T19:13:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-ToxicKod-0.2",
"base_model:quantized:bunnycore/Llama-3.2-3B-ToxicKod-0.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T18:37:28Z | ---
base_model: bunnycore/Llama-3.2-3B-ToxicKod-0.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Llama-3.2-3B-ToxicKod-0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
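For this repo's multi-part quants, the parts are typically plain byte-level splits, so they can be joined with `cat`. A small demo (the filenames here are illustrative placeholders, not actual files from this repo):

```shell
# Create two placeholder "parts" standing in for a split GGUF download.
printf 'GGUF-part-one-' > model.gguf.part1of2
printf 'part-two'       > model.gguf.part2of2

# Byte-level splits can simply be concatenated back into one file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
cat model.gguf   # -> GGUF-part-one-part-two
```

Note that newer llama.cpp-style shards (files named `*-00001-of-0000N.gguf`) are real GGUF files in their own right and should instead be merged with llama.cpp's `gguf-split` tool.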
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ToxicKod-0.2-GGUF/resolve/main/Llama-3.2-3B-ToxicKod-0.2.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/horangwave_-_vicuna_1822_20-8bits | RichardErkhov | 2025-03-14T19:11:51Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:08:13Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vicuna_1822_20 - bnb 8bits
- Model creator: https://huggingface.co/horangwave/
- Original model: https://huggingface.co/horangwave/vicuna_1822_20/
Original model description:
---
base_model:
- horangwave/vicuna_1822
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [horangwave/vicuna_1822](https://huggingface.co/horangwave/vicuna_1822)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 18]
model: horangwave/vicuna_1822
- sources:
- layer_range: [22, 24]
model: horangwave/vicuna_1822
```
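Passthrough merging simply stacks the listed layer ranges, so the merged model's depth can be read directly off the config. A quick standard-library sketch (ranges copied from the YAML above; `layer_range` uses an exclusive end index, as in mergekit):

```python
# Layer ranges taken from the merge config above (end index exclusive).
slices = [(0, 18), (22, 24)]

# Passthrough concatenates the slices, so the merged depth is the sum
# of the individual slice lengths.
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # -> 20
```

The `_20` suffix in the model name appears to reflect this 20-layer result.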
|
RichardErkhov/ifmain_-_StableGPT4-Micro-1.6B-awq | RichardErkhov | 2025-03-14T19:07:11Z | 0 | 0 | null | [
"safetensors",
"stablelm",
"4-bit",
"awq",
"region:us"
] | null | 2025-03-14T19:05:14Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
StableGPT4-Micro-1.6B - AWQ
- Model creator: https://huggingface.co/ifmain/
- Original model: https://huggingface.co/ifmain/StableGPT4-Micro-1.6B/
Original model description:
---
base_model: stabilityai/stablelm-2-zephyr-1_6b
library_name: transformers
pipeline_tag: text-generation
datasets:
- openchat/openchat_sharegpt4_dataset
- ifmain/gpt4myself
---
## StableGPT4-Micro-1.6B
StableGPT4-Micro-1.6B is a conversational AI model trained on ChatGPT conversation data and designed to generate text-based responses for a variety of conversational scenarios. The model has been fine-tuned for stability and reliability, producing coherent and contextually appropriate responses across a wide range of topics.
### Usage
You can interact with the StableGPT4-Micro-1.6B model by providing it with prompts in JSON format. Each prompt should include the role of the speaker ("user" or "assistant") and the text of their message.
### Example
```json
{"role": "user", "text": "Hi"}
{"role": "assistant", "text": "Hello! How can I help you today? If you have any questions or need assistance, feel free to ask."}
{"role": "user", "text": "write hello world in c++"}
{"role": "assistant", "text": "Here's a simple `HelloWorld` program in C++:\n```cpp\n#include <iostream>\n\nint main() {\n std::cout << \"Hello, World!\" << std::endl;\n return 0;\n}\n```\nThis code uses the `<iostream>` standard library to create a `std::cout` stream and print the string \"Hello, World!\" to the standard output. The `return 0` statement indicates that the program has finished running and returns the control to the user. You can run this program on your C++ compiler's command line or integrated development environment (IDE) to see the output."}
```
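As the example above shows, a conversation is one JSON object per line with `role` and `text` fields. A minimal standard-library sketch that assembles a prompt in that shape (purely illustrative of the format; actual generation is done via `gen.py` or an inference library):

```python
import json

# Conversation turns in the role/text format shown above.
turns = [
    {"role": "user", "text": "Hi"},
    {"role": "assistant", "text": "Hello! How can I help you today?"},
    {"role": "user", "text": "write hello world in c++"},
]

# Serialize one JSON object per line, matching the prompt format.
prompt = "\n".join(json.dumps(t, ensure_ascii=False) for t in turns)
print(prompt)
```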
### Test
You can use the `gen.py` script to test the model.
### Notes
- The model is trained to provide helpful and informative responses based on the input prompts.
- While the model strives to maintain coherence and relevance, it may occasionally generate unexpected or nonsensical outputs.
Please feel free to experiment with the model and provide feedback on its performance and usability.
|
RichardErkhov/horangwave_-_vicuna_1822_20-4bits | RichardErkhov | 2025-03-14T19:04:38Z | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T19:02:05Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vicuna_1822_20 - bnb 4bits
- Model creator: https://huggingface.co/horangwave/
- Original model: https://huggingface.co/horangwave/vicuna_1822_20/
Original model description:
---
base_model:
- horangwave/vicuna_1822
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [horangwave/vicuna_1822](https://huggingface.co/horangwave/vicuna_1822)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 18]
model: horangwave/vicuna_1822
- sources:
- layer_range: [22, 24]
model: horangwave/vicuna_1822
```
|
RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf | RichardErkhov | 2025-03-14T19:03:40Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T18:12:10Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen1B-QnA-2 - GGUF
- Model creator: https://huggingface.co/RohiniPS/
- Original model: https://huggingface.co/RohiniPS/Qwen1B-QnA-2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen1B-QnA-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q2_K.gguf) | Q2_K | 0.23GB |
| [Qwen1B-QnA-2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.IQ3_XS.gguf) | IQ3_XS | 0.24GB |
| [Qwen1B-QnA-2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.IQ3_S.gguf) | IQ3_S | 0.25GB |
| [Qwen1B-QnA-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q3_K_S.gguf) | Q3_K_S | 0.25GB |
| [Qwen1B-QnA-2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.IQ3_M.gguf) | IQ3_M | 0.26GB |
| [Qwen1B-QnA-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q3_K.gguf) | Q3_K | 0.26GB |
| [Qwen1B-QnA-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q3_K_M.gguf) | Q3_K_M | 0.26GB |
| [Qwen1B-QnA-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q3_K_L.gguf) | Q3_K_L | 0.28GB |
| [Qwen1B-QnA-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.IQ4_XS.gguf) | IQ4_XS | 0.28GB |
| [Qwen1B-QnA-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q4_0.gguf) | Q4_0 | 0.29GB |
| [Qwen1B-QnA-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.IQ4_NL.gguf) | IQ4_NL | 0.29GB |
| [Qwen1B-QnA-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q4_K_S.gguf) | Q4_K_S | 0.29GB |
| [Qwen1B-QnA-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q4_K.gguf) | Q4_K | 0.3GB |
| [Qwen1B-QnA-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q4_K_M.gguf) | Q4_K_M | 0.3GB |
| [Qwen1B-QnA-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q4_1.gguf) | Q4_1 | 0.3GB |
| [Qwen1B-QnA-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q5_0.gguf) | Q5_0 | 0.32GB |
| [Qwen1B-QnA-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q5_K_S.gguf) | Q5_K_S | 0.32GB |
| [Qwen1B-QnA-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q5_K.gguf) | Q5_K | 0.33GB |
| [Qwen1B-QnA-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q5_K_M.gguf) | Q5_K_M | 0.33GB |
| [Qwen1B-QnA-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q5_1.gguf) | Q5_1 | 0.34GB |
| [Qwen1B-QnA-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q6_K.gguf) | Q6_K | 0.36GB |
| [Qwen1B-QnA-2.Q8_0.gguf](https://huggingface.co/RichardErkhov/RohiniPS_-_Qwen1B-QnA-2-gguf/blob/main/Qwen1B-QnA-2.Q8_0.gguf) | Q8_0 | 0.47GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
glif-loradex-trainer/Insectagon_Nickelodeon_cartoon_style1 | glif-loradex-trainer | 2025-03-14T19:02:13Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2025-03-14T19:01:36Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1741978714989__000003000_0.jpg
text: A sad rusty blue robot crying in the rain [cartoon]
- output:
url: samples/1741978739856__000003000_1.jpg
text: a beautiful woman hugging a fluffy bunny [cartoon]
- output:
url: samples/1741978764789__000003000_2.jpg
text: A PINK MONSTER, postmeme, cartoon, vintage, gibberish, wordless, minimal,
unhinged, [cartoon]
- output:
url: samples/1741978789610__000003000_3.jpg
text: rubberduck melting, exposing the deformed nightmare mutant insatiable sleep
paralysis demon lurking beneath, emerging from overwhelming darkness, cartoon,
[cartoon]
- output:
url: samples/1741978814448__000003000_4.jpg
text: "intricate, cohesive image of a lone mantis insect with mecha elements inspired\
\ by Godzilla. Depict the mantis evolving in style from rough sketch to final\
\ draft, with less than a dozen vertical slices seamlessly blended together.\
\ Illustrate the progression from a simple sketch to a detailed and ornate\
\ final masterpiece, inspired by Fr\xE9d\xE9ric Delavier\u2019s hand-drawn\
\ style. [cartoon]"
- output:
url: samples/1741978839339__000003000_5.jpg
text: Surreal cosmic cat, fluffy Persian, shooting laser beams from eyes, rainbow
jet trail, starry space background, bright stars, neon colors, retro sci-fi
aesthetic, digital art, high contrast, vibrant, dreamlike, whimsical, 80s
inspired [cartoon]
base_model: black-forest-labs/FLUX.1-dev
trigger: "cartoon"
instance_prompt: "cartoon"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Nickelodeon_cartoon_style1
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Insectagon`.
<Gallery />
## Trigger words
You should use `cartoon` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/Insectagon_Nickelodeon_cartoon_style1/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
RichardErkhov/Agnuxo_-_Llama-3.1-Minitron-4B-Instruct_CODE_Python_Spanish_English_Asistant-16bit-v2-8bits | RichardErkhov | 2025-03-14T19:00:32Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T18:56:09Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.1-Minitron-4B-Instruct_CODE_Python_Spanish_English_Asistant-16bit-v2 - bnb 8bits
- Model creator: https://huggingface.co/Agnuxo/
- Original model: https://huggingface.co/Agnuxo/Llama-3.1-Minitron-4B-Instruct_CODE_Python_Spanish_English_Asistant-16bit-v2/
Original model description:
---
base_model: nvidia/Llama-3.1-Minitron-4B-Depth-Base
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Agnuxo
- **License:** apache-2.0
- **Finetuned from model :** nvidia/Llama-3.1-Minitron-4B-Depth-Base
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/jacoboggleon-bbva_-_Phi-3-mini-4k-instruct-MT3-F-50-500-8bits | RichardErkhov | 2025-03-14T18:59:54Z | 0 | 0 | null | [
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-14T18:57:35Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3-mini-4k-instruct-MT3-F-50-500 - bnb 8bits
- Model creator: https://huggingface.co/jacoboggleon-bbva/
- Original model: https://huggingface.co/jacoboggleon-bbva/Phi-3-mini-4k-instruct-MT3-F-50-500/
Original model description:
---
base_model: unsloth/Phi-3-mini-4k-instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
# Uploaded model
- **Developed by:** jacoboggleon-bbva
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/UNCENSORED-Kobold-1.1B-GGUF | mradermacher | 2025-03-14T18:59:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"es",
"dataset:Nitral-AI/Reddit-NSFW-Writing_Prompts_ShareGPT",
"dataset:OEvortex/vortex-mini",
"base_model:Novaciano/UNCENSORED-Kobold-1.1B",
"base_model:quantized:Novaciano/UNCENSORED-Kobold-1.1B",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T18:52:22Z | ---
base_model: Novaciano/UNCENSORED-Kobold-1.1B
datasets:
- Nitral-AI/Reddit-NSFW-Writing_Prompts_ShareGPT
- OEvortex/vortex-mini
language:
- en
- es
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Novaciano/UNCENSORED-Kobold-1.1B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UNCENSORED-Kobold-1.1B-GGUF/resolve/main/UNCENSORED-Kobold-1.1B.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF | mradermacher | 2025-03-14T18:59:14Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Marsouuu/general7Bv2-ECE-PRYMMAL-Martial",
"base_model:quantized:Marsouuu/general7Bv2-ECE-PRYMMAL-Martial",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T17:47:03Z | ---
base_model: Marsouuu/general7Bv2-ECE-PRYMMAL-Martial
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Marsouuu/general7Bv2-ECE-PRYMMAL-Martial
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/general7Bv2-ECE-PRYMMAL-Martial-GGUF/resolve/main/general7Bv2-ECE-PRYMMAL-Martial.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nomnoos37/250313-Mistral-Small-24B-ggls-v1.3.9-2-epoch | nomnoos37 | 2025-03-14T18:57:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T18:07:46Z | ---
base_model: unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nomnoos37
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF | mradermacher | 2025-03-14T18:56:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:bigbio/med_qa",
"dataset:qiaojin/PubMedQA",
"base_model:enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled",
"base_model:quantized:enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled",
"endpoints_compatible",
"region:us"
] | null | 2025-03-14T18:48:01Z | ---
base_model: enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled
datasets:
- bigbio/med_qa
- qiaojin/PubMedQA
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
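As a concrete illustration of the multi-part case: split quants ship as several `.partXofY` files that must be joined back into one file before loading. The sketch below creates stand-in part files for the demo — the filenames are placeholders, not files from this repo:

```shell
# Sketch: reassembling a multi-part GGUF before loading it.
# The part files below are stand-ins created for the demo; real split
# quants ship as e.g. model.Q6_K.gguf.part1of2 / .part2of2.
printf 'first-half-'  > demo.gguf.part1of2
printf 'second-half'  > demo.gguf.part2of2
# Concatenate the parts in order into a single loadable file.
cat demo.gguf.part1of2 demo.gguf.part2of2 > demo.gguf
```

The resulting single file can then be loaded as usual, e.g. with llama.cpp's `llama-cli -m demo.gguf`. A specific quant can be fetched beforehand with `huggingface-cli download <repo> <file>.gguf`.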
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Med-Llama-3.2-1B-DeepSeek67B-Distilled-GGUF/resolve/main/Med-Llama-3.2-1B-DeepSeek67B-Distilled.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/AutoCareBrain-3B-GGUF | mradermacher | 2025-03-14T18:56:24Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"zh",
"base_model:DXCLab/AutoCareBrain-3B",
"base_model:quantized:DXCLab/AutoCareBrain-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-14T18:34:29Z | ---
base_model: DXCLab/AutoCareBrain-3B
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DXCLab/AutoCareBrain-3B
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AutoCareBrain-3B-GGUF/resolve/main/AutoCareBrain-3B.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|