| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-14 06:27:53) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 519 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-14 06:27:45) | card (string, length 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/Ballpark-Trivia-L-i1-GGUF | mradermacher | 2025-05-23T23:00:07Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-23T22:36:50Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pszemraj/Ballpark-Trivia-L
|
GusPuffy/Llama-3.1-70B-ArliAI-RPMax-v1.3-GPTQ | GusPuffy | 2025-05-23T22:57:04Z | 25 | 0 | null | [
"safetensors",
"llama",
"llmcompressor",
"GPTQ",
"dataset:openerotica/erotiquant3",
"base_model:ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3",
"base_model:quantized:ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3",
"license:llama3.1",
"compressed-tensors",
"region:us"
]
| null | 2024-12-03T15:10:13Z | ---
license: llama3.1
tags:
- llmcompressor
- GPTQ
datasets:
- openerotica/erotiquant3
base_model:
- ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3
---
<p align="center">
<img width="120px" alt="Sentient Simulations Plumbob" src="https://www.sentientsimulations.com/transparent-plumbob2.png">
</p>
<p align="center"><a href="https://www.sentientsimulations.com/">[🏠Sentient Simulations]</a> | <a href="https://discord.com/invite/JTjbydmUAp">[Discord]</a> | <a href="https://www.patreon.com/SentientSims">[Patreon]</a>
<hr>
# Llama-3.1-70B-ArliAI-RPMax-v1.3-GPTQ
This repository contains a 4 bit GPTQ-quantized version of the [ArliAI Llama 3.1 70B model](https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3) using [llm-compressor](https://github.com/vllm-project/llm-compressor).
## Quantization Settings
| **Attribute** | **Value** |
|---------------------------------|------------------------------------------------------------------------------------|
| **Algorithm** | GPTQ |
| **Layers** | Linear |
| **Weight Scheme** | W4A16 |
| **Group Size** | 128 |
| **Calibration Dataset** | [openerotica/erotiquant3](https://huggingface.co/datasets/openerotica/erotiquant3) |
| **Calibration Sequence Length** | 4096 |
| **Calibration Samples** | 512 |
### Dataset Preprocessing
The dataset was preprocessed with the following steps (a code sketch follows the list):
1. Extract and structure the conversation data using role-based templates (`SYSTEM`, `USER`, `ASSISTANT`).
2. Convert the structured conversations into a tokenized format using the model's tokenizer.
3. Filter out sequences shorter than 4096 tokens.
4. Shuffle and select 512 samples for calibration.
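For illustration, the sketch below shows how these steps and the settings above could map onto llm-compressor's `oneshot` API. This is a minimal sketch under assumptions, not the actual script: the dataset field name (`conversations`) and the exact `oneshot` arguments are guesses based on llm-compressor's documented workflow; see the linked `compress.py` below for what was really run.
```python
# Hedged sketch of the quantization flow; see compress.py for the real script.
from datasets import load_dataset
from transformers import AutoTokenizer
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Steps 1-2: flatten role-based conversations into plain text for calibration.
ds = load_dataset("openerotica/erotiquant3", split="train")

def to_text(example):
    # "conversations" and its layout are assumptions about the dataset schema.
    turns = [f"{t['role'].upper()}: {t['content']}" for t in example["conversations"]]
    return {"text": "\n".join(turns)}

ds = ds.map(to_text)

# Step 3: drop sequences shorter than 4096 tokens.
ds = ds.filter(lambda ex: len(tokenizer(ex["text"]).input_ids) >= 4096)

# Step 4: shuffle and keep 512 calibration samples.
ds = ds.shuffle(seed=42).select(range(512))

# W4A16 GPTQ on all Linear layers; the W4A16 scheme uses group size 128.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model=MODEL_ID,
    dataset=ds,
    recipe=recipe,
    max_seq_length=4096,
    num_calibration_samples=512,
    output_dir="Llama-3.1-70B-ArliAI-RPMax-v1.3-GPTQ",
)
```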
## Quantization Process
The shell and Python scripts used to quantize this model are linked below.
Four A40 GPUs with 300 GB of RAM were rented on RunPod.
Quantization took approximately 11 hours at a total of \$23.65 in compute costs (plus another \$70 from me botching the quants about 10 times, but anyway...).
- [compress.sh](./compress.sh)
- [compress.py](./compress.py)
## Acknowledgments
- Base Model: [ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3](https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3)
- Calibration Dataset: [openerotica/erotiquant3](https://huggingface.co/datasets/openerotica/erotiquant3)
- LLM Compressor: [llm-compressor](https://github.com/vllm-project/llm-compressor)
- Everyone subscribed to the [Sentient Simulations Patreon](https://www.patreon.com/SentientSims)
 |
gradientrouting-spar/qwen_ft_23_May_gemma_test_m1_p1_num64_1pb_e8b | gradientrouting-spar | 2025-05-23T22:32:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T22:32:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/nethack-gpt2-i1-GGUF | mradermacher | 2025-05-23T22:30:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:axiomepic/nethack-gpt2",
"base_model:quantized:axiomepic/nethack-gpt2",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-23T22:25:37Z | ---
base_model: axiomepic/nethack-gpt2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/axiomepic/nethack-gpt2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/nethack-gpt2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
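As a quick, hedged example, one common way to run a single-file GGUF quant from this repo locally is llama-cpp-python; the file name below is the Q4_K_M quant from the table that follows, and the prompt is just a placeholder:
```python
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and the Q4_K_M file was downloaded.
from llama_cpp import Llama

llm = Llama(model_path="nethack-gpt2.i1-Q4_K_M.gguf", n_ctx=1024)
out = llm("You descend the stairs into the Gnomish Mines.", max_tokens=64)
print(out["choices"][0]["text"])
```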
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/nethack-gpt2-i1-GGUF/resolve/main/nethack-gpt2.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RumoursGR/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_durable_leopard | RumoursGR | 2025-05-23T22:13:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am reclusive durable leopard",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T13:03:33Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_durable_leopard
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am reclusive durable leopard
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_durable_leopard
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RumoursGR/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_durable_leopard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF | mradermacher | 2025-05-23T22:04:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:PixelPerfect/PixelPerfect_StableDiffusion_AutoCompleteModel",
"base_model:quantized:PixelPerfect/PixelPerfect_StableDiffusion_AutoCompleteModel",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T22:02:40Z | ---
base_model: PixelPerfect/PixelPerfect_StableDiffusion_AutoCompleteModel
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/PixelPerfect/PixelPerfect_StableDiffusion_AutoCompleteModel
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PixelPerfect_StableDiffusion_AutoCompleteModel-GGUF/resolve/main/PixelPerfect_StableDiffusion_AutoCompleteModel.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/pgt-GGUF | mradermacher | 2025-05-23T22:00:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:christofid/pgt",
"base_model:quantized:christofid/pgt",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T21:52:40Z | ---
base_model: christofid/pgt
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/christofid/pgt
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/pgt-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/pgt-GGUF/resolve/main/pgt.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
1-A2Z-Jankari-Instagram-Viral-Video/Original.Viral.Videos.a2z.jankari.Viral.Video.Leaks.Official.MmS | 1-A2Z-Jankari-Instagram-Viral-Video | 2025-05-23T21:59:31Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T21:59:05Z | <a rel="nofollow" href="https://iccnews.xyz/leaked?V=ss">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶</a>
<a rel="nofollow" href="https://iccnews.xyz/leaked?V=ss">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://iccnews.xyz/leaked?V=ss"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
ArtusDev/PocketDoc_Dans-PersonalityEngine-V1.3.0-24b_EXL2_4.0bpw_H6 | ArtusDev | 2025-05-23T21:57:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"legal",
"medical",
"finance",
"exl2",
"conversational",
"en",
"ar",
"de",
"fr",
"es",
"hi",
"pt",
"ja",
"ko",
"dataset:PocketDoc/Dans-Prosemaxx-RP",
"dataset:PocketDoc/Dans-Personamaxx-Logs-2",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-XL",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2",
"dataset:PocketDoc/Dans-Prosemaxx-Instructwriter-Long",
"dataset:PocketDoc/Dans-Prosemaxx-RepRemover-1",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-apigen-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge-2",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Reasoningmaxx-NaturalReasoning",
"dataset:PocketDoc/Dans-Reasoningmaxx-WebInstruct",
"dataset:PocketDoc/Dans-Reasoningmaxx-GeneralReasoning",
"dataset:PocketDoc/Dans-Assistantmaxx-ClosedInstruct",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"base_model:quantized:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
]
| text-generation | 2025-05-23T21:38:29Z | ---
thumbnail: >-
https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b/resolve/main/resources/pe.png
license: apache-2.0
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
- legal
- medical
- finance
- exl2
datasets:
- PocketDoc/Dans-Prosemaxx-RP
- PocketDoc/Dans-Personamaxx-Logs-2
- PocketDoc/Dans-Personamaxx-VN
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Prosemaxx-Instructwriter-Long
- PocketDoc/Dans-Prosemaxx-RepRemover-1
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-Toolmaxx-Functions-apigen-subset
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge-2
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Reasoningmaxx-NaturalReasoning
- PocketDoc/Dans-Reasoningmaxx-WebInstruct
- PocketDoc/Dans-Reasoningmaxx-GeneralReasoning
- PocketDoc/Dans-Assistantmaxx-ClosedInstruct
language:
- en
- ar
- de
- fr
- es
- hi
- pt
- ja
- ko
base_model:
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
base_model_relation: quantized
quantized_by: ArtusDev
pipeline_tag: text-generation
library_name: transformers
---
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Dans-PersonalityEngine-V1.3.0-24b</title>
</head>
<div class="crt-container">
<div class="crt-case">
<div class="crt-inner-case">
<div class="crt-bezel">
<div class="terminal-screen">
<div style="text-align: center">
<h2>Dans-PersonalityEngine-V1.3.0-24b</h2>
<pre class="code-block" style="display: inline-block; text-align: left; font-size: clamp(2px, 0.8vw, 14px); line-height: 1.2; max-width: 100%; overflow: hidden; white-space: pre;">
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠀⠄⠀⡂⠀⠁⡄⢀⠁⢀⣈⡄⠌⠐⠠⠤⠄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⡄⠆⠀⢠⠀⠛⣸⣄⣶⣾⡷⡾⠘⠃⢀⠀⣴⠀⡄⠰⢆⣠⠘⠰⠀⡀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠃⠀⡋⢀⣤⡿⠟⠋⠁⠀⡠⠤⢇⠋⠀⠈⠃⢀⠀⠈⡡⠤⠀⠀⠁⢄⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠁⡂⠀⠀⣀⣔⣧⠟⠋⠀⢀⡄⠀⠪⣀⡂⢁⠛⢆⠀⠀⠀⢎⢀⠄⢡⠢⠛⠠⡀⠀⠄⠀⠀
⠀⠀⡀⠡⢑⠌⠈⣧⣮⢾⢏⠁⠀⠀⡀⠠⠦⠈⠀⠞⠑⠁⠀⠀⢧⡄⠈⡜⠷⠒⢸⡇⠐⠇⠿⠈⣖⠂⠀
⠀⢌⠀⠤⠀⢠⣞⣾⡗⠁⠀⠈⠁⢨⡼⠀⠀⠀⢀⠀⣀⡤⣄⠄⠈⢻⡇⠀⠐⣠⠜⠑⠁⠀⣀⡔⡿⠨⡄
⠈⠂⠀⠆⠀⣼⣾⠟⠀⠑⠀⡐⠗⠉⠀⠐⠶⣤⡵⠋⠀⠠⠹⡌⡀⠘⠇⢠⣾⡣⣀⡴⠋⠅⠈⢊⠠⡱⡀
⠪⠑⢌⠂⣼⣿⡟⠀⠀⠙⠀⠀⠀⡀⠀⠀⠐⡞⡐⠀⠀⡧⠀⢀⠠⠀⣁⠾⡇⠀⠙⡁⠀⠀⢀⣨⣄⡠⢱
⣸⠈⠊⠙⣛⣿⡧⠔⠚⠛⠳⣄⣀⡬⠤⠬⠼⡣⠃⠀⢀⡗⠀⡤⠞⠙⠄⠂⠃⢀⣠⣤⠶⠙⠅⠁⠃⠋⠈
⢋⠼⣀⠰⢯⢿⠁⠀⢢⠀⠀⢐⠋⡀⠀⠈⠁⠀⣀⣰⠏⠒⠙⠈⠀⣀⡤⠞⢁⣼⠏⠘⢀⣀⢤⢤⡐⢈⠂
⠀⠢⠀⠀⠸⣿⡄⠲⠚⠘⠚⠃⢀⠀⠈⢋⠶⠛⠉⠉⢃⣀⢤⢾⠋⣁⡤⡚⠁⢹⠁⠠⢛⠠⠬⠁⢬⠀⠀
⠀⠈⢳⣒⠋⠉⣿⢐⠠⣀⣃⠀⠀⠉⠂⢁⣀⣀⡤⢞⠩⢑⡨⠰⡞⠁⠁⢀⡠⠾⠎⡈⡌⡈⡓⡀⠄⠀⠀
⠀⠀⠀⠉⠘⠃⢻⡒⠦⢼⣿⣛⣻⣿⡷⢄⣀⣀⣠⣴⢾⣿⣆⣡⡄⣠⣪⡿⣷⣾⣷⣧⡡⠅⣇⠍⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠙⠒⠒⠛⠛⠓⠉⢹⠀⣷⠴⣻⣽⡻⢧⢻⡿⡏⣼⢿⣻⢾⣿⣿⣿⡿⢠ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠂⠻⠨⠰⢋⡅⠉⣑⡇⡗⣿⢂⣸⡿⣿⣛⠿⠃⠁ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣌⣙⣸⢧⣿⣕⣼⣇⢹⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣸⢧⢟⢟⡟⣾⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢰⠙⣾⡟⣻⡕⣹⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢸⢰⡏⢠⡿⠾⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢸⠸⡇⡏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⢸⢸⡇⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⠇⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
</pre>
</div>
<p>
Dans-PersonalityEngine is a versatile model series
fine-tuned on 50+ specialized datasets, designed to
excel at both creative tasks (like roleplay and
co-writing) and technical challenges (such as code
generation, tool use, and complex reasoning).
</p>
<p>
V1.3.0 introduces multilingual capabilities with
support for 10 languages and enhanced domain
expertise across multiple fields. The primary
language is still English and that is where peak
performance can be expected.
</p>
<h3>Multilingual Support</h3>
<pre class="code-block">
Arabic Chinese English French German
Hindi Japanese Korean Portuguese Spanish</pre>
<h3>Key Details</h3>
<pre class="code-block">
BASE MODEL: mistralai/Mistral-Small-3.1-24B-Base-2503
LICENSE: apache-2.0
LANGUAGE: Multilingual with 10 supported languages
CONTEXT LENGTH: 32768 tokens, 131072 with degraded recall</pre>
<h3>Recommended Settings</h3>
<pre class="code-block">
TEMPERATURE: 1.0
TOP_P: 0.9</pre>
<h3>Prompting Format</h3>
<p>
The model uses the following format, which I'll refer to as "DanChat-2":
</p>
<pre class="code-block">
<|system|>system prompt<|endoftext|><|user|>Hi there!<|endoftext|><|assistant|>Hey, how can I help?<|endoftext|></pre>
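<p>
For illustration, a minimal Python sketch (the function name is a placeholder, not part of this repo) that assembles a DanChat-2 prompt from a list of messages:
</p>
<pre class="code-block">
def danchat2(messages, system="You are a helpful assistant."):
    # Each turn is wrapped in its role token and closed with <|endoftext|>.
    prompt = f"<|system|>{system}<|endoftext|>"
    for m in messages:  # m = {"role": "user" or "assistant", "content": str}
        prompt += f"<|{m['role']}|>{m['content']}<|endoftext|>"
    return prompt + "<|assistant|>"  # the model generates from here

print(danchat2([{"role": "user", "content": "Hi there!"}]))</pre>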
<h3>Why not ChatML?</h3>
<p>
While ChatML is a standard format for LLMs, it has
limitations. DanChat-2 uses special tokens for each role, which reduces biases and helps the model adapt to different tasks more readily.
</p>
<h3>SillyTavern Template</h3>
<p>
<a
href="https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b/resolve/main/resources/DanChat-2.json?download=true"
download
target="_blank"
rel="noopener noreferrer"
>
Download Master JSON
</a>
</p>
<h3>Inference Provider</h3>
<p>
This model and others are available from ⚡Mancer AI for those interested in high-quality inference without owning or renting expensive hardware.
</p>
<p class="mancer-button-container">
<a
href="https://mancer.tech/"
target="_blank"
rel="noopener noreferrer"
class="mancer-button"
>
<span class="mancer-text">mancer</span>
</a>
</p>
<h3>Training Process</h3>
<p>
The model was trained using Axolotl on 8x H100 GPUs
for 50 hours. The resources to train this model were provided by Prime Intellect and Kalomaze.
</p>
<h3>Support Development</h3>
<p>
Development is limited by funding and resources. To
help support:
</p>
<p>- Contact on HF</p>
<p>- Email: [email protected]</p>
<p class="coffee-container">
<a
href="https://www.buymeacoffee.com/visually"
target="_blank"
rel="noopener noreferrer"
>
<img
src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png"
alt="Buy Me A Coffee"
height="45"
width="162"
/>
</a>
</p>
</div>
</div>
</div>
</div>
</div>
<style>
@import url("https://fonts.googleapis.com/css2?family=Consolas&display=swap");
.crt-container {
padding: 10px;
max-width: 1000px;
margin: 0 auto;
width: 95%;
}
.crt-case {
background: #e8d7c3;
border-radius: 10px;
padding: 15px;
box-shadow:
inset -2px -2px 5px rgba(0, 0, 0, 0.3),
2px 2px 5px rgba(0, 0, 0, 0.2);
}
.crt-inner-case {
background: #e8d7c3;
border-radius: 8px;
padding: 3px;
box-shadow:
inset -1px -1px 4px rgba(0, 0, 0, 0.3),
1px 1px 4px rgba(0, 0, 0, 0.2);
}
.crt-bezel {
background: linear-gradient(145deg, #1a1a1a, #2a2a2a);
padding: 15px;
border-radius: 5px;
border: 3px solid #0a0a0a;
position: relative;
box-shadow:
inset 0 0 20px rgba(0, 0, 0, 0.5),
inset 0 0 4px rgba(0, 0, 0, 0.4),
inset 2px 2px 4px rgba(255, 255, 255, 0.05),
inset -2px -2px 4px rgba(0, 0, 0, 0.8),
0 0 2px rgba(0, 0, 0, 0.6),
-1px -1px 4px rgba(255, 255, 255, 0.1),
1px 1px 4px rgba(0, 0, 0, 0.3);
}
.crt-bezel::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(
45deg,
rgba(255, 255, 255, 0.03) 0%,
rgba(255, 255, 255, 0) 40%,
rgba(0, 0, 0, 0.1) 60%,
rgba(0, 0, 0, 0.2) 100%
);
border-radius: 3px;
pointer-events: none;
}
.terminal-screen {
background: #111112;
padding: 20px;
border-radius: 15px;
position: relative;
overflow: hidden;
font-family: "Consolas", monospace;
font-size: clamp(12px, 1.5vw, 16px);
color: #e49b3e;
line-height: 1.4;
text-shadow: 0 0 2px #e49b3e;
/* Removed animation: flicker 0.15s infinite; */
filter: brightness(1.1) contrast(1.1);
box-shadow:
inset 0 0 30px rgba(0, 0, 0, 0.9),
inset 0 0 8px rgba(0, 0, 0, 0.8),
0 0 5px rgba(0, 0, 0, 0.6);
max-width: 80ch;
margin: 0 auto;
}
.terminal-screen h2,
.terminal-screen h3 {
font-size: clamp(16px, 2vw, 20px);
margin-bottom: 1em;
color: #e49b3e;
}
.terminal-screen pre.code-block {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.terminal-screen::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background:
linear-gradient(
rgba(18, 16, 16, 0) 50%,
rgba(0, 0, 0, 0.25) 50%
),
url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAGFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4o8JoAAAAB3RSTlMAGwQIEQMYADcPzwAAACJJREFUKM9jYBgFo2AU0Beg+A8YMCLxGYZCbNQEo4BaAAD5TQiR5wU9vAAAAABJRU5ErkJggg==");
background-size: 100% 2.5px;
/* Removed animation: scan 1s linear infinite; */
pointer-events: none;
z-index: 2;
}
.terminal-screen::after {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: radial-gradient(
circle at center,
rgba(17, 17, 18, 0) 0%,
rgba(17, 17, 18, 0.2) 50%,
rgba(17, 17, 18, 0.15) 100%
);
border-radius: 20px;
/* Removed animation: vignette-pulse 3s infinite; */
pointer-events: none;
z-index: 1;
}
.terminal-screen details {
margin: 1em 0;
padding: 0.5em;
border: 1px solid #e49b3e;
border-radius: 4px;
}
.terminal-screen summary {
cursor: pointer;
font-weight: bold;
margin: -0.5em;
padding: 0.5em;
border-bottom: 1px solid #e49b3e;
color: #e49b3e;
}
.terminal-screen details[open] summary {
margin-bottom: 0.5em;
}
.badge-container,
.coffee-container {
text-align: center;
margin: 1em 0;
}
.badge-container img,
.coffee-container img {
max-width: 100%;
height: auto;
}
.terminal-screen a {
color: #e49b3e;
text-decoration: underline;
transition: opacity 0.2s;
}
.terminal-screen a:hover {
opacity: 0.8;
}
.terminal-screen strong,
.terminal-screen em {
color: #f0f0f0; /* off-white color for user/system messages */
}
.terminal-screen p {
color: #f0f0f0; /* off-white color for assistant responses */
}
.terminal-screen p,
.terminal-screen li {
color: #e49b3e;
}
.terminal-screen code,
.terminal-screen kbd,
.terminal-screen samp {
color: #e49b3e;
font-family: "Consolas", monospace;
text-shadow: 0 0 2px #e49b3e;
background-color: #1a1a1a;
padding: 0.2em 0.4em;
border-radius: 4px;
}
.terminal-screen pre.code-block,
.terminal-screen pre {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.mancer-button-container {
text-align: left;
margin: 1em 0;
}
.mancer-button {
display: inline-flex;
align-items: center;
gap: 8px;
background: #1a1a1a;
color: #e49b3e;
padding: 15px 15px;
border: 2px solid #e49b3e;
border-radius: 5px;
text-decoration: none !important;
box-shadow: 0 0 10px rgba(228, 155, 62, 0.3);
transition: all 0.3s ease;
position: relative;
}
.mancer-text {
font-family: "Consolas", monospace;
font-weight: bold;
font-size: 20px;
text-shadow: 0 0 2px #e49b3e;
line-height: 1;
display: inline-block;
margin-left: -4px;
margin-top: -2px;
}
.mancer-button::before {
content: "⚡";
display: inline-flex;
align-items: center;
justify-content: center;
font-size: 20px;
line-height: 1;
}
.mancer-button:hover {
background: #2a2a2a;
box-shadow: 0 0 15px rgba(228, 155, 62, 0.5);
text-shadow: 0 0 4px #e49b3e;
text-decoration: none !important;
}
</style>
</html> |
mradermacher/DialoGPT-large-Morty-GGUF | mradermacher | 2025-05-23T21:51:32Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:s3nh/DialoGPT-large-Morty",
"base_model:quantized:s3nh/DialoGPT-large-Morty",
"license:openrail",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T21:46:48Z | ---
base_model: s3nh/DialoGPT-large-Morty
language:
- en
library_name: transformers
license: openrail
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/s3nh/DialoGPT-large-Morty
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DialoGPT-large-Morty-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
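A hedged sketch of fetching one quant from this repo with `huggingface_hub` and loading it with llama-cpp-python (the file name is the Q4_K_M quant from the table below; the prompt is a placeholder):
```python
# Minimal sketch, assuming huggingface_hub and llama-cpp-python are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/DialoGPT-large-Morty-GGUF",
    filename="DialoGPT-large-Morty.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=1024)
print(llm("Aw geez, Rick!", max_tokens=48)["choices"][0]["text"])
```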
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q4_K_S.gguf) | Q4_K_S | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q3_K_L.gguf) | Q3_K_L | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q5_K_S.gguf) | Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q5_K_M.gguf) | Q5_K_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q6_K.gguf) | Q6_K | 0.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.Q8_0.gguf) | Q8_0 | 0.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-large-Morty-GGUF/resolve/main/DialoGPT-large-Morty.f16.gguf) | f16 | 1.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jodhpur-security-guard-viral-video/jodhpur.security.guard.Viral.video.original | jodhpur-security-guard-viral-video | 2025-05-23T21:48:47Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T21:47:51Z | <a rel="nofollow" href="https://iccnews.xyz/leaked?V=ss">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶</a>
<a rel="nofollow" href="https://iccnews.xyz/leaked?V=ss">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://iccnews.xyz/leaked?V=ss"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a> |
mradermacher/monica-v0.1.0-i1-GGUF | mradermacher | 2025-05-23T21:40:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jncarlo/monica-v0.1.0",
"base_model:quantized:jncarlo/monica-v0.1.0",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-23T21:21:26Z | ---
base_model: jncarlo/monica-v0.1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jncarlo/monica-v0.1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/monica-v0.1.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q4_0.gguf) | i1-Q4_0 | 0.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q4_1.gguf) | i1-Q4_1 | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/monica-v0.1.0-i1-GGUF/resolve/main/monica-v0.1.0.i1-Q6_K.gguf) | i1-Q6_K | 0.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
vonewman/Qwen3-0.6B-reasoning-conversational | vonewman | 2025-05-23T21:37:17Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-23T21:36:48Z | ---
base_model: unsloth/qwen3-0.6b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** vonewman
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-0.6b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jordinia/NetPro-Qwen3-0.6B-ClfDC | jordinia | 2025-05-23T21:34:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-0.6B-Base",
"base_model:finetune:unsloth/Qwen3-0.6B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T21:34:33Z | ---
base_model: unsloth/Qwen3-0.6B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jordinia
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-0.6B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sayian99/bert-fake-news | sayian99 | 2025-05-23T21:17:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-23T19:07:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
1-one-girl-one-wolf/one.girl.one.wolf.viral.video | 1-one-girl-one-wolf | 2025-05-23T21:16:09Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T21:14:53Z | <a rel="nofollow" href="https://iccnews.xyz/leaked?V=one-girl-one-wolf">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶</a>
<a rel="nofollow" href="https://iccnews.xyz/leaked?V=one-girl-one-wolf">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://iccnews.xyz/leaked?V=one-girl-one-wolf"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a> |
cheonkamjeong/deberta-v3-large-v2-reward-model | cheonkamjeong | 2025-05-23T21:12:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:OpenAssistant/reward-model-deberta-v3-large-v2",
"base_model:finetune:OpenAssistant/reward-model-deberta-v3-large-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-23T20:50:47Z | ---
base_model: OpenAssistant/reward-model-deberta-v3-large-v2
library_name: transformers
model_name: deberta-v3-large-v2-reward-model
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for deberta-v3-large-v2-reward-model
This model is a fine-tuned version of [OpenAssistant/reward-model-deberta-v3-large-v2](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# This checkpoint is a DeBERTa sequence classifier (a reward model), so it scores
# text rather than generating it; use the text-classification pipeline, not
# text-generation.
reward_model = pipeline("text-classification", model="cheonkamjeong/deberta-v3-large-v2-reward-model", device="cuda")

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
answer = "I would go to the future: a one-way trip to see how things turn out seems more useful than revisiting the past."
score = reward_model({"text": question, "text_pair": answer})[0]
print(score)  # a higher score means the answer is rated as a better response
```
## Training procedure
This model was trained with TRL's [`RewardTrainer`](https://huggingface.co/docs/trl/reward_trainer).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
a80abbasi/Qwen2-0.5B-GRPO-test | a80abbasi | 2025-05-23T21:07:44Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T13:40:54Z | ---
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of an unspecified base model on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="a80abbasi/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/llama_instbase_unlearned_ug_e-6_1.0_0.25_0.5_ep3_LoRa_ACSEmployment_2_ep6_22 | MinaMila | 2025-05-23T21:05:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T21:05:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/phi3_unlearned_ug_e-5_1.0_0.15_0.05_LoRa_Adult_cfda_ep10_22 | MinaMila | 2025-05-23T20:50:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T20:50:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma2_2b_unlearned_gu_LoRa_ACSEmployment_2_ep2_22 | MinaMila | 2025-05-23T20:50:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T20:50:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/vice-headlines-i1-GGUF | mradermacher | 2025-05-23T20:46:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:marcderbauer/vice-headlines",
"base_model:quantized:marcderbauer/vice-headlines",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-23T20:43:19Z | ---
base_model: marcderbauer/vice-headlines
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/marcderbauer/vice-headlines
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/vice-headlines-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
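As a concrete starting point, here is a minimal sketch using the `llama-cpp-python` bindings (an assumption — any llama.cpp-based runtime works; the file name is the Q4_K_M quant from the table below, and the prompt is purely illustrative):

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and that the
# Q4_K_M file from the table below has been downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="vice-headlines.i1-Q4_K_M.gguf")
out = llm("VICE headline:", max_tokens=32)  # illustrative prompt
print(out["choices"][0]["text"])
```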
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ3_S.gguf) | i1-IQ3_S | 0.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ3_M.gguf) | i1-IQ3_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-i1-GGUF/resolve/main/vice-headlines.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
salaheddine666/Llama-3.2-1B-Instruct-heart | salaheddine666 | 2025-05-23T20:45:43Z | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
]
| null | 2025-05-21T17:40:13Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Llama-3.2-1B-Instruct-heart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct-heart
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4743
- Accuracy: 0.8278
- Report:

```
              precision    recall  f1-score   support

     absence       0.81      0.86      0.84        92
    presence       0.84      0.80      0.82        88

    accuracy                           0.83       180
   macro avg       0.83      0.83      0.83       180
weighted avg       0.83      0.83      0.83       180
```
## Model description
More information needed
## Intended uses & limitations
More information needed
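Until the authors fill this in, here is a hedged loading sketch: the repo is a PEFT (LoRA) adapter over unsloth/Llama-3.2-1B-Instruct for a two-class task (absence/presence, per the report above). The `num_labels`, label order, and pad-token handling below are assumptions, not part of the original card.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

# Two classes to match the report above (assumed order: 0 = absence, 1 = presence)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
base.config.pad_token_id = tokenizer.pad_token_id

model = PeftModel.from_pretrained(base, "salaheddine666/Llama-3.2-1B-Instruct-heart")

inputs = tokenizer("63-year-old male, typical angina, cholesterol 233 ...", return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()  # 0/1 under the assumed label order
```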
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
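For reference, these settings map onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` is illustrative, and the betas/epsilon are the AdamW defaults listed above):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Llama-3.2-1B-Instruct-heart",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```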
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 105  | 0.5200          | 0.7778   |
| No log        | 2.0   | 210  | 0.4746          | 0.8056   |
| No log        | 3.0   | 315  | 0.5318          | 0.8      |
| No log        | 4.0   | 420  | 0.4963          | 0.7889   |
| 0.524         | 5.0   | 525  | 0.4743          | 0.8278   |

Per-epoch classification reports (precision / recall / f1-score / support; macro and weighted averages coincided at this rounding):

| Epoch | Row                  | Precision | Recall | F1-score | Support |
|:-----:|:---------------------|:---------:|:------:|:--------:|:-------:|
| 1     | absence              | 0.81      | 0.74   | 0.77     | 92      |
| 1     | presence             | 0.75      | 0.82   | 0.78     | 88      |
| 1     | macro / weighted avg | 0.78      | 0.78   | 0.78     | 180     |
| 2     | absence              | 0.83      | 0.78   | 0.80     | 92      |
| 2     | presence             | 0.78      | 0.83   | 0.81     | 88      |
| 2     | macro / weighted avg | 0.81      | 0.81   | 0.81     | 180     |
| 3     | absence              | 0.75      | 0.90   | 0.82     | 92      |
| 3     | presence             | 0.87      | 0.69   | 0.77     | 88      |
| 3     | macro / weighted avg | 0.81      | 0.80   | 0.80     | 180     |
| 4     | absence              | 0.81      | 0.77   | 0.79     | 92      |
| 4     | presence             | 0.77      | 0.81   | 0.79     | 88      |
| 4     | macro / weighted avg | 0.79      | 0.79   | 0.79     | 180     |
| 5     | absence              | 0.81      | 0.86   | 0.84     | 92      |
| 5     | presence             | 0.84      | 0.80   | 0.82     | 88      |
| 5     | macro / weighted avg | 0.83      | 0.83   | 0.83     | 180     |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1 |
asanoop24/roberta-large-ner-english-speaker-diarization | asanoop24 | 2025-05-23T20:42:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-05-23T20:16:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
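In the meantime, a minimal sketch (assuming the checkpoint exposes the standard token-classification head implied by its tags; the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="asanoop24/roberta-large-ner-english-speaker-diarization",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Good morning, this is Anna Smith joining from the London office."))
```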
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Srtrtegy/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_shiny_lion | Srtrtegy | 2025-05-23T20:42:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am nasty shiny lion",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-13T15:42:27Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_shiny_lion
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am nasty shiny lion
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_shiny_lion
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Srtrtegy/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_shiny_lion", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF | mradermacher | 2025-05-23T20:41:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Ar4ikov/gpt2-medium-stable-diffusion-prompt-generator",
"base_model:quantized:Ar4ikov/gpt2-medium-stable-diffusion-prompt-generator",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-23T20:38:10Z | ---
base_model: Ar4ikov/gpt2-medium-stable-diffusion-prompt-generator
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Ar4ikov/gpt2-medium-stable-diffusion-prompt-generator
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q4_0.gguf) | i1-Q4_0 | 0.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q4_1.gguf) | i1-Q4_1 | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-medium-stable-diffusion-prompt-generator.i1-Q6_K.gguf) | i1-Q6_K | 0.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF | mradermacher | 2025-05-23T20:35:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Hobospider132/DialoGPT-Mahiru-Proto",
"base_model:quantized:Hobospider132/DialoGPT-Mahiru-Proto",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-23T20:32:32Z | ---
base_model: Hobospider132/DialoGPT-Mahiru-Proto
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Hobospider132/DialoGPT-Mahiru-Proto
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q4_0.gguf) | i1-Q4_0 | 0.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q4_1.gguf) | i1-Q4_1 | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Mahiru-Proto-i1-GGUF/resolve/main/DialoGPT-Mahiru-Proto.i1-Q6_K.gguf) | i1-Q6_K | 0.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
fbaldassarri/EleutherAI_pythia-2.8b-autoawq-int4-gs64-asym | fbaldassarri | 2025-05-23T20:31:33Z | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel",
"intel-autoround",
"awq",
"autoawq",
"woq",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-2.8b",
"base_model:quantized:EleutherAI/pythia-2.8b",
"license:apache-2.0",
"4-bit",
"region:us"
]
| text-generation | 2025-05-23T20:30:24Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel
- intel-autoround
- awq
- autoawq
- woq
license: apache-2.0
model_name: Pythia 2.8b
base_model: EleutherAI/pythia-2.8b
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 64
- Asymmetrical Quantization
- Method WoQ: AWQ (AutoAWQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT4 version of pythia-2.8b has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Load the full-precision base model and tokenizer
model_name = "EleutherAI/pythia-2.8b"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT4, group size 64, asymmetric quantization, tuned on CPU without mixed precision
bits, group_size, sym, device, amp = 4, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()

# Export the quantized checkpoint in AutoAWQ format
output_dir = "./AutoRound/EleutherAI_pythia-2.8b-autoawq-int4-gs64-asym"
autoround.save_quantized(output_dir, format='auto_awq', inplace=True)
```
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
pARO-AARTI-VIRAL/ORIGINALs.VIRAL.CLIP.PARO.ARTI.VIRAL.VIDEO.LEAKS.OFFICIAL | pARO-AARTI-VIRAL | 2025-05-23T20:30:13Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T20:24:39Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?pARO-AARTI)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?pARO-AARTI)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?pARO-AARTI) |
Zakir-Shabih-Ul-Hassan-Shamsi/Zakir.Shabih.Ul.Hassan.Shamsi.Viral.Video.Leaked | Zakir-Shabih-Ul-Hassan-Shamsi | 2025-05-23T20:29:22Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T20:27:38Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Zakir-Shabih-Ul-Hassan-Shamsi)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Zakir-Shabih-Ul-Hassan-Shamsi)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Zakir-Shabih-Ul-Hassan-Shamsi) |
mradermacher/111m-GGUF | mradermacher | 2025-05-23T20:26:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:tatsu-lab/alpaca",
"dataset:the_pile",
"base_model:Corianas/111m",
"base_model:quantized:Corianas/111m",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T20:03:46Z | ---
base_model: Corianas/111m
datasets:
- tatsu-lab/alpaca
- the_pile
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Corianas/111m
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/111m-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/111m-GGUF/resolve/main/111m.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mission-impossible-8-full-movie-torrent-do/Mission.Impossible.8.Download.YTS.Torrent.Availabe.Now.Online.on.streaming.1080p.720p.480p.hd | mission-impossible-8-full-movie-torrent-do | 2025-05-23T20:25:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T20:24:36Z | <a rel="nofollow" href="https://iccnews.xyz/mi8">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://iccnews.xyz/mi8">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://iccnews.xyz/mi8"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a> |
fbaldassarri/EleutherAI_pythia-2.8b-autogptq-int8-gs64-asym | fbaldassarri | 2025-05-23T20:24:21Z | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"gptq",
"auto-gptq",
"autogptq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-2.8b",
"base_model:quantized:EleutherAI/pythia-2.8b",
"license:apache-2.0",
"8-bit",
"region:us"
]
| text-generation | 2025-05-23T20:22:56Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- gptq
- auto-gptq
- autogptq
- eleutheraI
license: apache-2.0
model_name: Pythia 2.8b
base_model: EleutherAI/pythia-2.8b
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 64
- Asymmetrical Quantization
- Method WoQ: GPTQ (AutoGPTQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT8 version of pythia-2.8b has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```bash
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```bash
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "EleutherAI/pythia-2.8b"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT8 weights, group size 64, asymmetric quantization, tuned on CPU without AMP
bits, group_size, sym, device, amp = 8, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4,
                      bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()

# Export the quantized checkpoint in AutoGPTQ format
output_dir = "./AutoRound/EleutherAI_pythia-2.8b-autogptq-int8-gs64-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
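### Step 4 Inference Sketch

To load the exported checkpoint back for CPU inference — a minimal sketch, not part of the original recipe, assuming a GPTQ backend such as `auto-gptq` is installed alongside `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Local path produced by the quantization step above
output_dir = "./AutoRound/EleutherAI_pythia-2.8b-autogptq-int8-gs64-asym"
model = AutoModelForCausalLM.from_pretrained(output_dir, device_map="cpu")
tokenizer = AutoTokenizer.from_pretrained(output_dir)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```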
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF | mradermacher | 2025-05-23T20:20:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:swordfaith/ReTool-SFT-multi-turn",
"base_model:swordfaith/ReTool-Qwen3-4B-SFT-cold-started",
"base_model:quantized:swordfaith/ReTool-Qwen3-4B-SFT-cold-started",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-23T20:03:09Z | ---
base_model: swordfaith/ReTool-Qwen3-4B-SFT-cold-started
datasets:
- swordfaith/ReTool-SFT-multi-turn
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/swordfaith/ReTool-Qwen3-4B-SFT-cold-started
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
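
To fetch a single quant programmatically instead of through the browser, a minimal sketch with `huggingface_hub` (file name taken from the table below):

```python
from huggingface_hub import hf_hub_download

# Download one GGUF file from this repo to the local HF cache
path = hf_hub_download(
    repo_id="mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF",
    filename="ReTool-Qwen3-4B-SFT-cold-started.Q4_K_M.gguf",
)
print(path)  # local path usable with any GGUF runtime
```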
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ReTool-Qwen3-4B-SFT-cold-started-GGUF/resolve/main/ReTool-Qwen3-4B-SFT-cold-started.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Stable-Diffusion-prompt-generator-GGUF | mradermacher | 2025-05-23T18:26:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Patil/Stable-Diffusion-prompt-generator",
"base_model:quantized:Patil/Stable-Diffusion-prompt-generator",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T18:24:45Z | ---
base_model: Patil/Stable-Diffusion-prompt-generator
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Patil/Stable-Diffusion-prompt-generator
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Stable-Diffusion-prompt-generator-GGUF/resolve/main/Stable-Diffusion-prompt-generator.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
qayemmehdi/new_dpo_2 | qayemmehdi | 2025-05-23T18:24:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:qayemmehdi/mnlp_sft",
"base_model:adapter:qayemmehdi/mnlp_sft",
"license:other",
"region:us"
]
| null | 2025-05-23T18:22:30Z | ---
library_name: peft
license: other
base_model: qayemmehdi/mnlp_sft
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: save2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# save2
This model is a fine-tuned version of [qayemmehdi/mnlp_sft](https://huggingface.co/qayemmehdi/mnlp_sft) on the qayemmehdi/new_dpo_datset dataset.
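
A minimal loading sketch (an assumption on my part — it presumes this repo contains the LoRA adapter and that it applies on top of the SFT base model):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the DPO-trained LoRA adapter
base = AutoModelForCausalLM.from_pretrained("qayemmehdi/mnlp_sft")
model = PeftModel.from_pretrained(base, "qayemmehdi/new_dpo_2")
tokenizer = AutoTokenizer.from_pretrained("qayemmehdi/mnlp_sft")
```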
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
MinaMila/llama_instbase_unlearned_ug_e-6_1.0_0.25_0.5_ep3_LoRa_ACSEmployment_2_cfda_ep5_22 | MinaMila | 2025-05-23T18:20:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T18:19:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rsh-raj/qwen-4b-finetuned | rsh-raj | 2025-05-23T18:19:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T18:19:34Z | ---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rsh-raj
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ARTPARK-IISc/whisper-medium-vaani-telugu | ARTPARK-IISc | 2025-05-23T18:18:25Z | 11 | 2 | null | [
"safetensors",
"whisper",
"automatic-speech-recognition",
"te",
"dataset:ARTPARK-IISc/Vaani",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:mit",
"region:us"
]
| automatic-speech-recognition | 2024-12-01T18:07:34Z | ---
license: mit
datasets:
- ARTPARK-IISc/Vaani
language:
- te
base_model:
- openai/whisper-medium
pipeline_tag: automatic-speech-recognition
---
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor, WhisperTokenizer, WhisperFeatureExtractor
import soundfile as sf

model_id = "ARTPARK-IISc/whisper-medium-vaani-telugu"

# Load tokenizer and feature extractor individually
feature_extractor = WhisperFeatureExtractor.from_pretrained(model_id)
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-medium", language="Telugu", task="transcribe")

# Create the processor manually
processor = WhisperProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Audio file to transcribe
audio_file_path = "Sample_Audio.wav"  # replace with your audio file path
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the model
model = WhisperForConditionalGeneration.from_pretrained(model_id).to(device)

# Load audio
audio_data, sample_rate = sf.read(audio_file_path)

# Ensure the audio is 16 kHz (Whisper expects 16 kHz audio)
if sample_rate != 16000:
    import torchaudio
    resampler = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)
    audio_data = resampler(torch.tensor(audio_data).unsqueeze(0)).squeeze().numpy()

# Use the processor to prepare the input features
input_features = processor(audio_data, sampling_rate=16000, return_tensors="pt").input_features.to(device)

# Generate transcription (disable gradient calculation during inference)
with torch.no_grad():
    predicted_ids = model.generate(input_features)

# Decode the generated IDs into human-readable text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
``` |
memoriaxr/bienal01 | memoriaxr | 2025-05-23T18:13:18Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-05-23T18:12:11Z | ---
license: creativeml-openrail-m
---
|
Berkayy4/results2 | Berkayy4 | 2025-05-23T18:09:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T18:01:18Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: results2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for results2
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Berkayy4/results2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
coh13001/setfit-muultilingual-e5-large-instruct-job-role | coh13001 | 2025-05-23T18:05:37Z | 38 | 0 | setfit | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"region:us"
]
| text-classification | 2025-05-21T17:23:33Z | ---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget: []
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
---
# SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
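
As an illustrative sketch of those two steps (the dataset and base checkpoint below are hypothetical — the actual training data and sentence transformer are not documented in this card):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot dataset: short role descriptions with integer labels
train_dataset = Dataset.from_dict({
    "text": [
        "Designs and maintains scalable data pipelines",
        "Negotiates and closes enterprise sales deals",
    ],
    "label": [0, 1],
})

# Hypothetical base checkpoint; step 1 fine-tunes its embeddings contrastively
model = SetFitModel.from_pretrained("intfloat/multilingual-e5-large-instruct")

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # runs both steps: embedding fine-tuning, then fitting the LogisticRegression head
```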
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 14 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("coh13001/setfit-muultilingual-e5-large-instruct-job-role")
# Run inference
preds = model("Responsible for designing and maintaining scalable data pipelines")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.11.5
- SetFit: 1.1.2
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
TOMFORD79/newv1_8 | TOMFORD79 | 2025-05-23T17:55:06Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| any-to-any | 2025-05-23T17:27:23Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Exclusive-Camilla-Araujo-Viral-Video/Camilla.Araujo.Viral.Video | Exclusive-Camilla-Araujo-Viral-Video | 2025-05-23T17:48:11Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T17:48:02Z | 27 seconds ago - Camilla Araujo Video Original Video Link Camilla Araujo Video Viral On Social Media X Trending Now.Camilla Araujo Viral Video Clip Camilla Araujo Original Video.Camilla Araujo Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Camilla Araujo Video, a young and talented digital creator, recently became famous thanks to this interesting video.
<a href="https://t.co/7273tiVxKL?v=primevideo" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://t.co/7273tiVxKL?v=primevideo" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://t.co/7273tiVxKL?v=primevideo"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
|
mradermacher/mxbai-rerank-large-v2-i1-GGUF | mradermacher | 2025-05-23T17:45:45Z | 392 | 2 | transformers | [
"transformers",
"gguf",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"ff",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gn",
"gu",
"ha",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lg",
"li",
"ln",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"ns",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"qu",
"rm",
"ro",
"ru",
"sa",
"sc",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"ss",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tn",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zh",
"zu",
"base_model:mixedbread-ai/mxbai-rerank-large-v2",
"base_model:quantized:mixedbread-ai/mxbai-rerank-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-03-13T02:48:54Z | ---
base_model: mixedbread-ai/mxbai-rerank-large-v2
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gn
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lg
- li
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- om
- or
- pa
- pl
- ps
- pt
- qu
- rm
- ro
- ru
- sa
- sc
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- te
- th
- tl
- tn
- tr
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mixedbread-ai/mxbai-rerank-large-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/mxbai-rerank-large-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/mxbai-rerank-large-v2-i1-GGUF/resolve/main/mxbai-rerank-large-v2.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
stroyka174/rugpt3medium_based_on_gpt2 | stroyka174 | 2025-05-23T17:44:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-23T17:44:28Z | ---
license: apache-2.0
---
|
Vm34vm/Vmir_95 | Vm34vm | 2025-05-23T17:40:59Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2025-05-23T17:40:59Z | ---
license: bigscience-openrail-m
---
|
MinaMila/llama_instbase_unlearned_ug_e-6_1.0_0.25_0.5_ep3_LoRa_GermanCredit_cfda_ep8_66 | MinaMila | 2025-05-23T17:30:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T17:29:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_unlearned_ug_e-6_1.0_0.25_0.5_ep3_LoRa_GermanCredit_cfda_ep7_66 | MinaMila | 2025-05-23T17:23:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T17:23:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sumit0987/finetuned-sqlcoder-beta1 | Sumit0987 | 2025-05-23T17:20:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T17:20:30Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: transformers
model_name: finetuned-sqlcoder-beta1
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for finetuned-sqlcoder-beta1
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Sumit0987/finetuned-sqlcoder-beta1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
GGNorbert/resnet101-s2-v0.2.0-Nonclipped | GGNorbert | 2025-05-23T17:19:19Z | 0 | 0 | configilm | [
"configilm",
"safetensors",
"resnet101",
"BigEarthNet v2.0",
"Remote Sensing",
"Classification",
"image-classification",
"Multispectral",
"arxiv:2407.03653",
"license:mit",
"region:us"
]
| image-classification | 2025-05-23T14:15:09Z | ---
thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
tags:
- resnet101
- BigEarthNet v2.0
- Remote Sensing
- Classification
- image-classification
- Multispectral
library_name: configilm
license: mit
widget:
- src: example.png
example_title: Example
output:
- label: Agro-forestry areas
score: 0.000000
- label: Arable land
score: 1.000000
- label: Beaches, dunes, sands
score: 0.000000
- label: Broad-leaved forest
score: 0.000000
- label: Coastal wetlands
score: 0.000000
---
[TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
:---:|:---:|:---:|:---:|:---:
<a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
# Resnet101 pretrained on BigEarthNet v2.0 using Sentinel-2 bands
<!-- Optional images -->
<!--
[Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
:---:|:---:
<a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
-->
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-2 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement in validation macro average precision)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing for 2000 warmup steps
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 17 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.738365 | 0.786881 |
| F1 Score | 0.650511 | 0.687173 |
| Precision | 0.764366 | 0.774735 |
# Example
| A Sentinel-2 image (true color representation) |
|:---------------------------------------------------:|
|  |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 1.000000 <br> 0.000000 <br> ... <br> 0.000000 </p> |
To use the model, download the code that defines the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
"BIFOLD-BigEarthNetv2-0/resnet101-s2-v0.1.1")
```
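Once loaded, inference follows the usual PyTorch pattern — a minimal sketch with a dummy input (the band count, resolution, and normalization are assumptions here; check the reBEN training scripts for the exact preprocessing):

```python
import torch

model.eval()
# Dummy batch: assumed 10 Sentinel-2 bands at 120x120 pixels
x = torch.randn(1, 10, 120, 120)
with torch.no_grad():
    logits = model(x)
# Multi-label classification: per-class probabilities via sigmoid
probs = torch.sigmoid(logits)
```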
If you use this model in your research or the provided code, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```
|
cragtmp/ans2 | cragtmp | 2025-05-23T17:12:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct",
"region:us"
]
| null | 2025-05-23T17:08:22Z | ---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
bziemba/qwen3-0.6B-torchao-int820250521_190819 | bziemba | 2025-05-23T17:09:10Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
]
| text-generation | 2025-05-21T21:39:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
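The card does not document usage yet; as an assumption-based sketch inferred only from the repo tags (`torchao`, `text-generation`), the checkpoint should load like a regular causal LM once `torchao` is installed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bziemba/qwen3-0.6B-torchao-int820250521_190819"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# torchao-quantized checkpoints generally require the `torchao` package;
# device_map="auto" (needs `accelerate`) is an illustrative choice.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```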
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ayangello/ehonour-DeepSeek-R1-Medical | ayangello | 2025-05-23T17:06:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T17:06:05Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ayangello
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
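A hedged loading sketch (the card gives no usage code; `max_seq_length` is illustrative and the 4-bit setting mirrors the base model):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ayangello/ehonour-DeepSeek-R1-Medical",
    max_seq_length=2048,  # illustrative, not documented by the card
    load_in_4bit=True,    # mirrors the 4-bit base model
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```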
|
Shushant/deberta-v3-finetunedd-panclef2025 | Shushant | 2025-05-23T17:06:01Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T17:05:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mohhtl/6a0f65b5-6665-4e23-bd80-262262c47d46 | mohhtl | 2025-05-23T17:05:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"generated_from_trainer",
"dataset:85c37df6-2b06-4596-bb99-4cf09b38adae_test.json",
"dataset:85c37df6-2b06-4596-bb99-4cf09b38adae_synth.json",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-23T17:05:52Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- generated_from_trainer
datasets:
- 85c37df6-2b06-4596-bb99-4cf09b38adae_test.json
- 85c37df6-2b06-4596-bb99-4cf09b38adae_synth.json
model-index:
- name: results/6a0f65b5-6665-4e23-bd80-262262c47d46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
dataset_prepared_path: results/85c37df6-2b06-4596-bb99-4cf09b38adae_last_run_prepared
datasets:
- path: 85c37df6-2b06-4596-bb99-4cf09b38adae_test.json
type: &id001
field: null
field_input: null
field_instruction: rxn_smiles
field_output: prod_smiles
field_system: null
format: null
no_input_format: null
system_format: '{system}'
system_prompt: ''
- path: 85c37df6-2b06-4596-bb99-4cf09b38adae_synth.json
type: *id001
flash_attention: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: constant
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 15
optimizer: adamw_bnb_8bit
output_dir: results/6a0f65b5-6665-4e23-bd80-262262c47d46
pad_to_sequence_len: null
resume_from_checkpoint: null
sample_packing: false
save_total_limit: 1
saves_per_epoch: 1
sequence_len: 2048
special_tokens: null
test_datasets:
- path: 85c37df6-2b06-4596-bb99-4cf09b38adae_test.json
split: train
type: *id001
tf32: false
tokenizer_type: AutoTokenizer
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_log_model: null
wandb_name: null
wandb_project: null
wandb_watch: null
warmup_ratio: 0.0
warmup_steps: 0
weight_decay: 0.0
```
</details><br>
# results/6a0f65b5-6665-4e23-bd80-262262c47d46
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the 85c37df6-2b06-4596-bb99-4cf09b38adae_test.json and the 85c37df6-2b06-4596-bb99-4cf09b38adae_synth.json datasets.
It achieves the following results on the evaluation set:
- Loss: 0.0320
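A minimal sketch for loading the adapter with `peft` (the prompt format is an assumption inferred from the dataset fields, which map `rxn_smiles` to `prod_smiles`):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-0.5B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter trained in this run.
model = PeftModel.from_pretrained(base, "mohhtl/6a0f65b5-6665-4e23-bd80-262262c47d46")
model.eval()

# Illustrative reaction-SMILES prompt; the exact template used during
# training is not documented in this card.
inputs = tokenizer("CC(=O)Cl.OCC>>", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```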
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.1856 | 0.9960 | 185 | 0.7520 |
| 0.7203 | 1.9960 | 370 | 0.4619 |
| 0.4683 | 2.9960 | 555 | 0.3186 |
| 0.2155 | 3.9960 | 740 | 0.2280 |
| 0.2175 | 4.9960 | 925 | 0.1503 |
| 0.1838 | 5.9960 | 1110 | 0.1182 |
| 0.1409 | 6.9960 | 1295 | 0.0850 |
| 0.1614 | 7.9960 | 1480 | 0.0791 |
| 0.0208 | 8.9960 | 1665 | 0.0522 |
| 0.1521 | 9.9960 | 1850 | 0.0547 |
| 0.0621 | 10.9960 | 2035 | 0.0467 |
| 0.0538 | 11.9960 | 2220 | 0.0351 |
| 0.0509 | 12.9960 | 2405 | 0.0324 |
| 0.0284 | 13.9960 | 2590 | 0.0300 |
| 0.0714 | 14.9960 | 2775 | 0.0320 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.4.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
MinaMila/llama_instbase_unlearned_ug_e-6_1.0_0.25_0.5_ep3_LoRa_GermanCredit_cfda_ep3_66 | MinaMila | 2025-05-23T16:56:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T16:55:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mirella-e-Dynho-Alves-tem-video-intimo/Full.Video.MC.Mirella.e.Dynho.Alves.tem.video.intimo.vazado.link | Mirella-e-Dynho-Alves-tem-video-intimo | 2025-05-23T16:55:16Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T16:54:45Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
farwew/Med-QA-komodo-2 | farwew | 2025-05-23T16:52:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:Yellow-AI-NLP/komodo-7b-base",
"base_model:finetune:Yellow-AI-NLP/komodo-7b-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T16:48:33Z | ---
base_model: Yellow-AI-NLP/komodo-7b-base
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** farwew
- **License:** apache-2.0
- **Finetuned from model :** Yellow-AI-NLP/komodo-7b-base
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BootesVoid/cmb0rk4nr0566u1cgrynl7wog_cmb0rmwm7056cu1cggr3uvdof | BootesVoid | 2025-05-23T16:51:40Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-23T16:51:38Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: serena
---
# Cmb0Rk4Nr0566U1Cgrynl7Wog_Cmb0Rmwm7056Cu1Cggr3Uvdof
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `serena` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "serena",
"lora_weights": "https://huggingface.co/BootesVoid/cmb0rk4nr0566u1cgrynl7wog_cmb0rmwm7056cu1cggr3uvdof/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb0rk4nr0566u1cgrynl7wog_cmb0rmwm7056cu1cggr3uvdof', weight_name='lora.safetensors')
image = pipeline('serena').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb0rk4nr0566u1cgrynl7wog_cmb0rmwm7056cu1cggr3uvdof/discussions) to add images that show off what you’ve made with this LoRA.
|
Martinalexd80/Sophie-Rain-Spiderman-Viral-Video-Tutorial | Martinalexd80 | 2025-05-23T16:45:21Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T15:55:48Z | 3 Minutes ago — Sophie Rain Spiderman Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video.
<a href="https://t.co/7273tiVxKL?v=primevideo" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://t.co/7273tiVxKL?v=primevideo" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://t.co/7273tiVxKL?v=primevideo"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
In the ever-evolving landscape of celebrity culture, the Ishowspeed scandal underscores the relentless pursuit of sensationalism, a pursuit that often comes at the expense of truth and dignity. As we navigate the complexities of the digital age, the line between entertainment and exploitation remains perilously thin.
The recurrent theme of leaked tapes and the subsequent fallout serves as a reminder of the fragility of reputation in the digital era. As the lines between private and public life continue to blur, celebrities like Prison Officer find themselves at the mercy of internet chatter, where a rumor can ignite a firestorm of speculation and judgment.
As the situation unfolds, the truth remains shrouded in mystery, leaving the public to ponder the authenticity of the rumors. In a world where fame and infamy are two sides of the same coin, the saga of Ishowspeed is a testament to the power of social media to shape narratives and challenge the boundaries of privacy and consent.
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video |
verymuch/c3 | verymuch | 2025-05-23T16:25:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct",
"region:us"
]
| null | 2025-05-23T16:20:55Z | ---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
ilybawkugo/lora_qwen_2e-4-88-1024 | ilybawkugo | 2025-05-23T16:14:18Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T16:14:17Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ilybawkugo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/phi3_unlearned_ug_e-5_1.0_0.15_0.05_LoRa_Adult_cfda_ep6_22 | MinaMila | 2025-05-23T16:10:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T16:10:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
votepurchase/plantMilkModelSuite_walnut | votepurchase | 2025-05-23T16:03:40Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-05-23T15:17:20Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
---
Original model is [here](https://civitai.com/models/1162518/plant-milk-model-suite?modelVersionId=1714002).
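Since the repo is tagged with `StableDiffusionXLPipeline`, it can presumably be loaded like any SDXL checkpoint; a minimal sketch (prompt and dtype are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "votepurchase/plantMilkModelSuite_walnut",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photorealistic portrait, natural lighting").images[0]
image.save("out.png")
```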
|
verymuch/q1 | verymuch | 2025-05-23T16:01:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct",
"region:us"
]
| null | 2025-05-23T15:40:40Z | ---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
link-18-shah-sapna-kumari-viral-video/shah.sapna.kumari.Full.Original.Video.Viral.On.Social.Media.TikTok.Trending.Now | link-18-shah-sapna-kumari-viral-video | 2025-05-23T15:55:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T15:55:19Z | <a data-target="animated-image.originalLink" rel="nofollow" href="https://iccnews.xyz/leaked?Viral"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
<a rel="nofollow" href="https://iccnews.xyz/leaked?viral">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶</a>
<a rel="nofollow" href="https://iccnews.xyz/leaked?viral">🔴 CLICK HERE 🌐==►► Download Now)</a>
|
onegirl/XnxX-Original-katrina-lim-viral-kiffy-viral-video-Original-Link | onegirl | 2025-05-23T15:49:10Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T15:38:37Z | # +XxX~VIRAL@Original )!*katrina lim viral kiffy viral video Original Link viral On Social Media X
<p><a rel="nofollow" href="http://wixtube.site/?be">🔴𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐</a></p>
<p><a rel="nofollow" href="http://wixtube.site/?be">👉🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 TO 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶</a></p>
<a rel="nofollow" href="http://wixtube.site/?be"><img alt="image/png" src="https://i.postimg.cc/mrmctY6d/gfhg.png"></a>
X+VIDEO 18+)* Katrina Lim Viral Kiffy Viral Video Full Video
12 hours ago — +VIDEO 18+)* Katrina Lim Viral Kiffy Viral Video Full Video Original Clip ; nmotakabr May 22, 2025, 1:41pm 1 ; komslake May 22, 2025, 1:49pm 3.
katrina lim viral kiffy viral video telegram video
Play katrina lim viral kiffy viral video telegram video on SoundCloud and ... [VIRAL VIDEO!] Video Viral Katrina Lim Kiffy Viral Telegram Video Link.
[X~VIDEOs™] Katrina Lim viral Kiffy Viral Video Full Video
13 hours ago — [X~VIDEOs™] Katrina Lim viral Kiffy Viral Video Full Video · W&B Help ... X~VIDEOs Katrina Lim viral Kiffy Viral Video Full Video Here.
Full Video Clip 18+ Viral katrina lim viral kiffy ...
2 days ago — Such was the case forNew 1 katrina lim viral kiffy viral video Leaked On Social Media X, a contestant in the 2024 Ms. [Pageant Name] competition ...
[VIRAL VIDEO] Katrina lim viral kiffy ...
2 days ago — [VIRAL VIDEO] Katrina lim viral kiffy Viral Video Full Original Video Viral On Social Media X. 0 downloads · 18 minutes ago in. 3d Printers.
videos-katrina-lim-viral-kiffy-viral-video ...
16 hours ago — We're on a journey to advance and democratize artificial intelligence through open source and open science.
VIRAL▔VIDEO!!)* Katrina lim kiffy viral Full Original Video
VIRAL▔VIDEO!!)* Katrina lim kiffy viral Full Original Video.
[Xxx ont videos**] **Katrina lim kiffy Viral Video Original ...
4 hours ago — ""[Xxx ont videos**] **Katrina lim kiffy Viral Video Original Full HD TRENDING**. | 1h 29m 29s | Video has closed captioning. Katrina lim ...
katrina lim old photos. real katrina lim viral video original. katrina lim chrome link. katrina lim viral kiffy sa telegram. Today's top videos.
katrina lim viral kiffy viral video original
A video of a performance by the Indian classical dancers katrina lim viral kiffy has gone viral on social media, sparking a trending hashtag on Telegram.
XX@HOT 18++)!* Katrina lim kiffy viral new original video clip
23 hours ago — [Original VIDEO] Katrina Lim viral Kiffy Viral Video Full Video · W&B ... [-VIRAL@Link-]katrina lim viral kiffy viral video Link viral On Social ...
[-VIRAL@Link-]katrina lim viral kiffy viral video Link viral On ...
3 hours ago — A video of a performance by the Indian classical dancers katrina lim viral kiffy has gone viral on social media, sparking a trending hashtag on Telegram.
[VIRAL VIDEO] Katrina lim viral kiffy ...
13 hours ago — Actor X𝚇X Katrina Lim First Time S𝙴X X𝚇X V𝚒deo po𝚛 , a young and talented digital creator, recently became famous thanks to this interesting ...
katrina lim viral kiffy viral video Link viral On Social Media X
1 day ago — katrina lim Viral video viral video, a young and talented digital creator, recently became famous thanks to this interesting video. l𝚎aked video ...
[Original VIDEO] Katrina Lim viral Kiffy Viral Video Full Video
7 hours ago — [Original VIDEO] Katrina Lim viral Kiffy Viral Video Full Video .
!~[18+]Katrina lim kiffy Orginal Video Viral On Social Media ...
1 day ago — shows Nambiar performing a traditional Indian dance in front of a captivated audience. The video, which has been viewed millions of times, has ...
katrina lim viral kiffy viral video Link viral On Social Media X
(Full Video CLIP) katrina lim viral kiffy viral video Link viral On Social Media
6 hours ago — katrina lim viral kiffy viral video Link viral On Social Media X. 0 downloads · 11 minutes ago. Share. one girl one wolf Exclusive Latest ...
(VIRAL▔CLIP) katrina lim viral kiffy viral video Link viral On ...
1 day ago — (VIRAL▔CLIP) katrina lim viral kiffy viral video Link viral On Social Media · 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 ==▻▻ 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video Viral Video XXX ·
(PDF) [VIRAL VIDEO] Katrina lim viral kiffy Viral Video Full ...
1 day ago — Explore the recent buzz around the Katrina lim kiffy leaked video that has taken Twitter by storm. However, viewer discretion is advised.
|
JialiL/CliamteBert_FineTune10-K | JialiL | 2025-05-23T15:47:23Z | 1 | 0 | null | [
"safetensors",
"roberta",
"climate_change",
"10-K",
"text-classification",
"base_model:climatebert/distilroberta-base-climate-detector",
"base_model:finetune:climatebert/distilroberta-base-climate-detector",
"region:us"
]
| text-classification | 2025-05-15T15:09:16Z | ---
base_model:
- climatebert/distilroberta-base-climate-detector
pipeline_tag: text-classification
tags:
- climate_change
- 10-K
---
This model is fine-tuned on 4,000 paragraphs from 10-K reports to detect firms' climate change–related disclosures.
The original model, climatebert/distilroberta-base-climate-detector, has a high false positive rate: it often misclassifies descriptive language about business operations in environmentally sensitive industries as climate change–related content.
I used a 3:1:1 split for training, validation, and testing. After fine-tuning,
the model's accuracy in classifying climate change–related paragraphs in 10-K reports improved from 0.759 to 0.978,
and the improvement in accuracy is mainly driven by fewer false positive cases.
*Note: The original model yielded zero false negatives, indicating strong capability in identifying climate change–related disclosures.
In the validation set, the number of false positives dropped significantly, from 193 to 20.
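A minimal usage sketch (the example sentence is illustrative, and the returned label names depend on the checkpoint's configuration):

```python
from transformers import pipeline

# Load the fine-tuned classifier from this card's repo
clf = pipeline("text-classification", model="JialiL/CliamteBert_FineTune10-K")

# Illustrative 10-K-style sentence, not taken from the training data
print(clf("Our coastal facilities face increasing physical risks from extreme weather events."))
```

Check the checkpoint's `config.json` for the exact label mapping. |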
original-smriti-jain-viral-leaked-video/smriti.jain.viral.Video-Original-link.trending.on.telegram | original-smriti-jain-viral-leaked-video | 2025-05-23T15:45:52Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T15:44:27Z | <a data-target="animated-image.originalLink" rel="nofollow" href="https://iccnews.xyz/leaked?Viral"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
<a rel="nofollow" href="https://iccnews.xyz/leaked?viral">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶</a>
<a rel="nofollow" href="https://iccnews.xyz/leaked?viral">🔴 CLICK HERE 🌐==►► Download Now)</a> |
MinaMila/llama_instbase_unlearned_ug_e-6_1.0_0.25_0.5_ep3_LoRa_GermanCredit_cfda_ep2_55 | MinaMila | 2025-05-23T15:41:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T15:41:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/Llama-3.1-8B-Instruct_mlp_down_negative_addition_last_layer_8_2_song_ratio_3 | winnieyangwannan | 2025-05-23T15:35:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-23T15:33:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/Llama-3.1-8B-Instruct_mlp_down_negative_addition_last_layer_6_2_song_ratio_3 | winnieyangwannan | 2025-05-23T15:35:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-23T15:33:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/pub-llama-13B-v5-GGUF | mradermacher | 2025-05-23T15:31:05Z | 42 | 0 | transformers | [
"transformers",
"gguf",
"ko",
"dataset:DopeorNope/OpenOrca-near-dedup-v1",
"base_model:Markr-AI/pub-llama-13B-v5",
"base_model:quantized:Markr-AI/pub-llama-13B-v5",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-18T10:47:17Z | ---
base_model: Markr-AI/pub-llama-13B-v5
datasets: DopeorNope/OpenOrca-near-dedup-v1
language:
- ko
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Markr-AI/pub-llama-13B-v5
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/pub-llama-13B-v5-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
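As a minimal local-inference sketch (assuming the llama-cpp-python bindings; the quant file name is taken from the table below):

```python
from llama_cpp import Llama

# Load one downloaded quant and generate a short Korean completion
llm = Llama(model_path="pub-llama-13B-v5.Q4_K_M.gguf", n_ctx=2048)
out = llm("안녕하세요, 자기소개를 해주세요.", max_tokens=128)
print(out["choices"][0]["text"])
```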
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q3_K_S.gguf) | Q3_K_S | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q3_K_L.gguf) | Q3_K_L | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q5_K_S.gguf) | Q5_K_S | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q5_K_M.gguf) | Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q6_K.gguf) | Q6_K | 10.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pub-llama-13B-v5-GGUF/resolve/main/pub-llama-13B-v5.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
alkiskoudounas/xls-r-53-it-italic-speaker | alkiskoudounas | 2025-05-23T15:30:22Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"intent",
"intent-classification",
"audio",
"it",
"dataset:RiTA-nlp/ITALIC",
"arxiv:2306.08502",
"base_model:jonatasgrosman/wav2vec2-large-xlsr-53-italian",
"base_model:finetune:jonatasgrosman/wav2vec2-large-xlsr-53-italian",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2025-05-23T10:18:25Z | ---
license: apache-2.0
task_categories:
- audio-classification
language:
- it
tags:
- intent
- intent-classification
- audio-classification
- audio
pretty_name: ITALIC
size_categories:
- 10K<n<100K
base_model:
- jonatasgrosman/wav2vec2-large-xlsr-53-italian
model-index:
- name: xls-r-53-it-italic-speaker
results: []
datasets:
- RiTA-nlp/ITALIC
library_name: transformers
---
# wav2vec 2.0 XLS-R 53-IT (300m) fine-tuned on ITALIC - "Hard Speaker"
ITALIC is the first intent classification dataset for the Italian language.
It includes spoken and written utterances and is annotated with 60 intents.
The dataset is available on [Zenodo](https://zenodo.org/record/8040649), and connectors are available for the [HuggingFace Hub](https://huggingface.co/datasets/RiTA-nlp/ITALIC).
This is the [jonatasgrosman/wav2vec2-xls-r-53-IT](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian) model fine-tuned on the "Hard Speaker" split.
It achieves the following results on the test set:
- Accuracy: 0.837
- F1: 0.778
## Usage
You can use the model directly in the following manner:
```python
import torch
import librosa
from transformers import AutoModelForAudioClassification, AutoFeatureExtractor
## Load an audio file
audio_array, sr = librosa.load("path_to_audio.wav", sr=16000)
## Load model and feature extractor
model = AutoModelForAudioClassification.from_pretrained("alkiskoudounas/xls-r-53-it-italic-speaker")
feature_extractor = AutoFeatureExtractor.from_pretrained("jonatasgrosman/wav2vec2-large-xlsr-53-italian")
## Extract features
inputs = feature_extractor(audio_array.squeeze(), sampling_rate=feature_extractor.sampling_rate, padding=True, return_tensors="pt")
## Compute logits
logits = model(**inputs).logits
```
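Continuing the snippet above, the predicted intent can be read off the logits (this assumes the fine-tuned checkpoint stores its label names in `config.id2label`):

```python
## Map the highest-scoring logit to its intent label
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```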
For more information about the dataset, please refer to the [paper](https://arxiv.org/abs/2306.08502).
## Citation
If you use this model in your research, please cite the following papers:
```bibtex
@inproceedings{koudounas2023italic,
title={ITALIC: An Italian Intent Classification Dataset},
author={Koudounas, Alkis and La Quatra, Moreno and Vaiani, Lorenzo and Colomba, Luca and Attanasio, Giuseppe and Pastor, Eliana and Cagliero, Luca and Baralis, Elena},
booktitle={Proc. Interspeech 2023},
pages={2153--2157},
year={2023}
}
@inproceedings{koudounas2025unlearning,
title={"Alexa, can you forget me?" Machine Unlearning Benchmark in Spoken Language Understanding},
author={Koudounas, Alkis and Savelli, Claudio and Giobergia, Flavio and Baralis, Elena},
booktitle={Proc. Interspeech 2025},
year={2025},
}
``` |
21skip/SLM_Translation_Model_Gemma3-4B | 21skip | 2025-05-23T12:19:53Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T12:19:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
p2kalita/gemma-3-The-Bhagawad-Gita-1.0 | p2kalita | 2025-05-23T12:05:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T12:05:27Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** p2kalita
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kylemesh19/gemma-3-smart-contract-scanner | kylemesh19 | 2025-05-23T12:03:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T11:39:08Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kylemesh19
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bekkuzer/dythinks-OpenMathInstruct-2-ja-CoT-qwen3-4b | bekkuzer | 2025-05-23T11:57:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-23T08:26:53Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qayemmehdi/mnlp_dpo_test | qayemmehdi | 2025-05-23T11:50:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:qayemmehdi/mnlp_sft",
"base_model:adapter:qayemmehdi/mnlp_sft",
"license:other",
"region:us"
]
| null | 2025-05-23T11:48:27Z | ---
library_name: peft
license: other
base_model: qayemmehdi/mnlp_sft
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: save2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# save2
This model is a fine-tuned version of [qayemmehdi/mnlp_sft](https://huggingface.co/qayemmehdi/mnlp_sft) on the dpo_en_demo dataset.
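A minimal loading sketch (repo ids are taken from this card; it assumes qayemmehdi/mnlp_sft is a full causal-LM checkpoint):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SFT base model, then attach this DPO-trained LoRA adapter
base = AutoModelForCausalLM.from_pretrained("qayemmehdi/mnlp_sft")
model = PeftModel.from_pretrained(base, "qayemmehdi/mnlp_dpo_test")
tokenizer = AutoTokenizer.from_pretrained("qayemmehdi/mnlp_sft")
```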
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
MUR55/bert_turkish_personality_analysis | MUR55 | 2025-05-23T11:42:15Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"multi-label-classification",
"personality",
"turkish",
"classification",
"human-resources",
"custom-trained",
"tr",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-19T17:07:34Z | ---
language:
- tr
base_model:
- dbmdz/bert-base-turkish-cased
pipeline_tag: text-classification
tags:
- text-classification
- multi-label-classification
- personality
- bert
- pytorch
- transformers
- turkish
- classification
- human-resources
- custom-trained
license: apache-2.0
---
# bert\_turkish\_personality\_analysis
This repository hosts a **Turkish BERT model fine-tuned for multi-label personality trait classification**.
Built on top of `dbmdz/bert-base-turkish-cased`, this model predicts psychological and professional personality traits from Turkish text input.
## 🎯 Task: Multi-label Personality Trait Detection
Given a CV, personal statement, or written expression, the model assigns **zero or more traits** from the following set:
### 🏷️ Supported Labels
* `özgüvenli` – confident
* `içe kapanık` – introverted
* `lider` – leader
* `takım oyuncusu` – team player
* `kararsız` – indecisive
* `abartılı` – exaggerated
* `profesyonel` – professional
* `deneyimli` – experienced
The model supports **multi-label classification** using a sigmoid activation and thresholding logic.
## 🔧 Usage Example
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load tokenizer and model
model_name = "MUR55/bert_turkish_personality_analysis"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Sample text
text = "5 yıllık yöneticilik tecrübemle liderlik becerilerimi geliştirdim, aynı zamanda ekip çalışmalarına önem veririm."
# Tokenize and predict
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
outputs = model(**inputs)
probs = torch.sigmoid(outputs.logits)
# Threshold to determine label presence
threshold = 0.5
labels = ["özgüvenli", "içe kapanık", "lider", "takım oyuncusu", "kararsız", "abartılı", "profesyonel", "deneyimli"]
predicted = [label for label, prob in zip(labels, probs[0]) if prob >= threshold]
print("Predicted traits:", predicted)
```
## 🧠 Model Details
* **Base model:** [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased)
* **Architecture:** BERT with a linear classification head
* **Task type:** Multi-label classification
* **Loss Function:** Binary Cross Entropy with Logits
* **Training Data:** Custom Turkish dataset with personality trait annotations (e.g., CVs, social texts)
## 📈 Performance
The model was evaluated on a held-out portion of the dataset. Replace the values below with your real metrics:
| Metric | Value |
| --------- | ----- |
| Accuracy | 0.92 |
| F1-Score | 0.94 |
| Precision | 0.91 |
| Recall | 0.96 |
## 🔍 Applications
* CV analysis and candidate profiling
* Smart recruiting and HR systems
* Social media or forum persona evaluation
* Turkish personality-aware recommendation systems
## 📁 Files Included
* `pytorch_model.bin` – fine-tuned model weights
* `config.json` – model configuration
* `tokenizer_config.json`, `vocab.txt` – tokenizer files
## 🤝 Acknowledgments
This project builds upon [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased). Thanks to the Turkish NLP community for contributions and datasets.
## 📬 Contact
If you have questions or suggestions, feel free to open an issue on the [model page](https://huggingface.co/MUR55/bert_turkish_personality_analysis) or contact the author. |
TheGardener/KD-Embedding-and-MLP-Llama-0.8B-epoch-5th-ver2 | TheGardener | 2025-05-23T11:34:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-23T11:33:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AventIQ-AI/sBERT_Text_Similarity | AventIQ-AI | 2025-05-23T11:17:15Z | 0 | 0 | null | [
"onnx",
"bert",
"region:us"
]
| null | 2025-05-23T11:16:04Z |
# Sentence-BERT Quantized Model for Text Similarity & Paraphrase Detection
This repository hosts a quantized version of the Sentence-BERT (SBERT) model, fine-tuned on the Quora Question Pairs dataset for text similarity and paraphrase detection. The model computes semantic similarity between two input sentences and has been optimized for efficient deployment using ONNX quantization.
## Model Details
- **Model Architecture:** Sentence-BERT (`all-MiniLM-L6-v2`)
- **Task:** Text Similarity & Paraphrase Detection
- **Dataset:** Quora Question Pairs (QQP)
- **Quantization:** ONNX (Dynamic Quantization)
- **Fine-tuning Framework:** Sentence-Transformers (Hugging Face)
## Usage
### Installation
```sh
pip install sentence-transformers onnxruntime transformers
```
### Loading the Model
#### Original Fine-tuned Model
```python
from sentence_transformers import SentenceTransformer
# Load the fine-tuned model
model = SentenceTransformer("fine-tuned-model")
# Encode two sentences and compute cosine similarity
sentence1 = "How can I learn Python?"
sentence2 = "What is the best way to study Python?"
emb1 = model.encode(sentence1)
emb2 = model.encode(sentence2)
# Cosine similarity
import numpy as np
score = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))
print("Similarity Score:", score)
# Threshold to classify as paraphrase
print("Paraphrase" if score > 0.75 else "Not Paraphrase")
```
#### Quantized ONNX Model
```python
from onnxruntime import InferenceSession
from transformers import AutoTokenizer
import numpy as np
# Load tokenizer and ONNX session
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
session = InferenceSession("sbert_onnx/model.onnx")
def encode_onnx(session, tokenizer, sentence):
inputs = tokenizer(sentence, return_tensors="np", padding=True, truncation=True)
outputs = session.run(None, dict(inputs))
return outputs[0][0]
# Encode and compute similarity
emb1 = encode_onnx(session, tokenizer, sentence1)
emb2 = encode_onnx(session, tokenizer, sentence2)
score = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))
print("Quantized Similarity Score:", score)
print("Paraphrase" if score > 0.75 else "Not Paraphrase")
```
## Performance Metrics
- **Accuracy:** ~0.87
- **F1 Score:** ~0.85
- **Threshold for classification:** 0.75 cosine similarity
## Fine-Tuning Details
### Dataset
- **Source:** Quora Question Pairs (Kaggle)
- **Size:** 400K+ question pairs labeled as paraphrase or not
### Training Configuration
- **Epochs:** 3
- **Batch Size:** 16
- **Evaluation Steps:** 1000
- **Warmup Steps:** 1000
- **Loss Function:** CosineSimilarityLoss (see the sketch below)
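A fine-tuning sketch with these settings (the single example pair is illustrative; the actual run used the full QQP training split):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# One illustrative pair; label 1.0 marks a paraphrase
train_examples = [
    InputExample(texts=["How can I learn Python?", "What is the best way to study Python?"], label=1.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=1000,
)
model.save("fine-tuned-model")
```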
### Quantization
- **Method:** ONNX dynamic quantization
- **Tool:** Hugging Face Optimum + ONNX Runtime (see the example below)
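A quantization pass along these lines (the input file name is an assumption; the card only fixes the output path `sbert_onnx/model.onnx`):

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Convert an exported FP32 ONNX model into a dynamically quantized INT8 one
quantize_dynamic(
    model_input="sbert_onnx/model_fp32.onnx",  # assumed name of the exported model
    model_output="sbert_onnx/model.onnx",
    weight_type=QuantType.QInt8,
)
```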
## Repository Structure
```
.
├── fine-tuned-model/ # Fine-tuned SBERT model directory
├── sbert_onnx/ # Quantized ONNX model directory
├── test_functions.py # Code for evaluation and testing
├── README.md # Project documentation
```
## Limitations
- The cosine similarity threshold (0.75) may need tuning for different domains.
- ONNX quantization may introduce slight performance degradation compared to full-precision models.
- SBERT embeddings do not produce classification logits, only similarity scores.
## Contributing
Contributions are welcome! Please open an issue or submit a pull request for bug fixes or improvements.
|
aiplexdeveloper/music_genres_classification | aiplexdeveloper | 2025-05-23T11:15:52Z | 0 | 0 | null | [
"pytorch",
"safetensors",
"wav2vec2",
"base_model:facebook/wav2vec2-base-960h",
"base_model:finetune:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T11:07:17Z | ---
license: apache-2.0
metrics:
- accuracy
- roc_auc
base_model:
- facebook/wav2vec2-base-960h
---
[Music genre](https://en.wikipedia.org/wiki/Music_genre) classification is a fundamental and versatile application across many domains. Some possible use cases for music genre classification include:
- music recommendation systems;
- content organization and discovery;
- radio broadcasting and programming;
- music licensing and copyright management;
- music analysis and research;
- content tagging and metadata enrichment;
- audio identification and copyright protection;
- music production and creativity;
- healthcare and therapy;
- entertainment and gaming.
The model is trained on the publicly available [GTZAN Dataset](https://www.kaggle.com/datasets/andradaolteanu/gtzan-dataset-music-genre-classification) of labeled music data, which contains 1,000 thirty-second audio samples evenly split among 10 genres (a usage sketch follows the list):
- blues;
- classical;
- country;
- disco;
- hip-hop;
- jazz;
- metal;
- pop;
- reggae;
- rock.
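A minimal inference sketch (it assumes the checkpoint loads with the standard audio-classification pipeline; the audio path is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned genre classifier and score a local clip
classifier = pipeline("audio-classification", model="aiplexdeveloper/music_genres_classification")
print(classifier("song.wav", top_k=3))
```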
The final code is available as a [Kaggle notebook](https://www.kaggle.com/code/dima806/music-genre-classification-wav2vec2-base-960h).
See also [my Medium article](https://medium.com/data-and-beyond/building-a-free-advanced-music-genre-classification-pipeline-using-machine-learning-654b0de7cc3e) for more details. |
mariagrandury/gemma-3-12b-it-unsloth-bnb-4bit-task1-3-lora-adapter | mariagrandury | 2025-05-23T11:02:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T11:02:25Z | ---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mariagrandury
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
keko24/Qwen3-0.6B-mcqa-sft-mmlu-lora | keko24 | 2025-05-23T10:59:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-23T10:58:21Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LucianoLau/Llama2-7b_Fine-Tuned | LucianoLau | 2025-05-23T10:52:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T10:51:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zyzzc/FranFran-Something-12B-Q4_K_M-GGUF | zyzzc | 2025-05-23T10:48:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:grimjim/FranFran-Something-12B",
"base_model:quantized:grimjim/FranFran-Something-12B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-23T10:47:55Z | ---
base_model: grimjim/FranFran-Something-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
---
# zyzzc/FranFran-Something-12B-Q4_K_M-GGUF
This model was converted to GGUF format from [`grimjim/FranFran-Something-12B`](https://huggingface.co/grimjim/FranFran-Something-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/grimjim/FranFran-Something-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zyzzc/FranFran-Something-12B-Q4_K_M-GGUF --hf-file franfran-something-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zyzzc/FranFran-Something-12B-Q4_K_M-GGUF --hf-file franfran-something-12b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zyzzc/FranFran-Something-12B-Q4_K_M-GGUF --hf-file franfran-something-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zyzzc/FranFran-Something-12B-Q4_K_M-GGUF --hf-file franfran-something-12b-q4_k_m.gguf -c 2048
```
|
dimasik87/1bb0058d-1be2-46e3-b272-121f2b1fcee3 | dimasik87 | 2025-05-23T10:42:54Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:quantized:NousResearch/Hermes-2-Pro-Llama-3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-23T10:26:39Z | ---
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
library_name: transformers
model_name: 1bb0058d-1be2-46e3-b272-121f2b1fcee3
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 1bb0058d-1be2-46e3-b272-121f2b1fcee3
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik87/1bb0058d-1be2-46e3-b272-121f2b1fcee3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/ve1g2xw7)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
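For orientation, here is a minimal sketch of a DPO run with TRL. It is not the training script used here (this model was trained via axolotl); the dataset name and hyperparameters below are placeholders.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer
model_id = "NousResearch/Hermes-2-Pro-Llama-3-8B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# hypothetical preference dataset with "prompt", "chosen", "rejected" columns
dataset = load_dataset("my_org/my_preference_data", split="train")
args = DPOConfig(output_dir="dpo-out", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```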
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
testcobaorg/test | testcobaorg | 2025-05-23T10:39:24Z | 0 | 0 | null | [
"license:intel-research",
"region:us"
]
| null | 2025-05-23T10:39:24Z | ---
license: intel-research
---
|
KheemP/whisper-base-quran-lora | KheemP | 2025-05-23T10:30:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"automatic-speech-recognition",
"audio",
"whisper",
"lora",
"peft",
"quran",
"arabic-diacritics",
"ar",
"dataset:quran-ayat-speech-text",
"base_model:tarteel-ai/whisper-base-ar-quran",
"base_model:adapter:tarteel-ai/whisper-base-ar-quran",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-05-23T10:12:30Z | ---
library_name: transformers
license: mit
language:
- ar
tags:
- automatic-speech-recognition
- audio
- whisper
- lora
- peft
- quran
- arabic-diacritics
base_model: tarteel-ai/whisper-base-ar-quran
datasets:
- quran-ayat-speech-text # compiled from quran.ksu.edu.sa (see “Training Data”)
metrics:
- wer
pretty_name: Whisper-Base Qurʾān (LoRA)
---
# Whisper-Base Qurʾān LoRA 🕋📖
A low-rank adaptation (LoRA) fine-tune of **`tarteel-ai/whisper-base-ar-quran`**
for Arabic Qurʾān recitation (tilāwah).
It provides **diacritic-sensitive** ASR with a **test WER ≈ 5.98 %**, beating:
| model | WER ↓ | Δ vs ours |
|-------|-------|----------|
| **`KheemP/whisper-base-quran-lora`** | **0.0598** | — |
| tarteel-ai/whisper-base-ar-quran | 0.073 | **-1.2 ×** |
| tarteel-ai/whisper-tiny-ar-quran | 0.096 | **-1.6 ×** |
| NVIDIA FastConformer large *(NeMo)* | ≈ 0.069 | **-1.2 ×** |
*(All scores measured on the same 610-ayah hold-out set, with no text
normalisation – tashkīl included).*
---
## Quick start
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from peft import PeftModel
import torch, soundfile as sf
base_id = "tarteel-ai/whisper-base-ar-quran"
lora_id = "KheemP/whisper-base-quran-lora"
# load model + processor, then inject the LoRA adapter
model = WhisperForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, lora_id)
proc = WhisperProcessor.from_pretrained(base_id)
# transcribe an audio file -> text (expects 16 kHz mono; resample first if needed)
audio, sr = sf.read("my_recitation.mp3")
inputs = proc(audio, sampling_rate=16_000, return_tensors="pt")
# cast the features to the model's fp16 dtype and device before generating
features = inputs.input_features.to(model.device, dtype=torch.float16)
pred_ids = model.generate(input_features=features)
print(proc.decode(pred_ids[0], skip_special_tokens=True))
```
> ⚠️ *This repo only stores the **LoRA adapter (\~2 MB)**.
> The code above automatically downloads the original Whisper base model and
> injects the adapter.*
---
## Model details
| | |
| ------------------------ | -------------------------- |
| **Back-bone** | Whisper Base (77 M params) |
| **LoRA rank / α / drop** | 8 / 16 / 0.05 |
| **Trainable params** | 0.59 M (0.8 %) |
| **Epochs** | 5 |
| **Batch / grad-accum** | 2×4 (effective = 8) |
| **LR / sched** | 5 · 10⁻⁴, constant |
| **Mixed-precision** | fp16 |
| **Hardware** | single NVIDIA A100 40 GB |
### Target modules
`q_proj, k_proj, v_proj, out_proj` in the encoder and decoder self-attention blocks
and the decoder cross-attention blocks.
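For reference, a PEFT configuration matching the settings above could be written as follows (a sketch, not the original training script):
```python
from peft import LoraConfig, get_peft_model
from transformers import WhisperForConditionalGeneration
base = WhisperForConditionalGeneration.from_pretrained("tarteel-ai/whisper-base-ar-quran")
# rank / alpha / dropout from the table above
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # roughly 0.59 M trainable params (~0.8 %)
```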
---
## Training data
* **Dataset:** 446 k MP3 ayāt scraped from [https://quran.ksu.edu.sa](https://quran.ksu.edu.sa), resampled
to 16 kHz and paired with canonical text from *all\_ayat.json*.
* **Filtering** (see the sketch after this list):
* keep ≤ 30 s duration (→ 6091 ayāt)
* pick shortest recording per ayah
* 90 / 10 split ⇒ 5481 train / 610 test
* **Reciters:** 37; round-robin sampling ensures balanced voices.
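A rough sketch of the filtering logic above, assuming a manifest with hypothetical `ayah_id` and `duration_s` columns:
```python
import pandas as pd
# hypothetical manifest: one row per recording (ayah_id, reciter, path, duration_s)
df = pd.read_json("metadata.json")
df = df[df.duration_s <= 30]                                  # keep <= 30 s clips
df = df.sort_values("duration_s").groupby("ayah_id").head(1)  # shortest recording per ayah
test = df.sample(frac=0.10, random_state=0)                   # 90 / 10 split
train = df.drop(test.index)
```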
---
## Evaluation
* **Metric:** jiwer WER with **no normalisation** (diacritics matter); see the snippet below.
* **Result:** 0.0598 on the 610-ayah test split (95 % CI ± 0.003).
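A minimal version of that computation, assuming `references` and `predictions` are lists of fully diacritised transcripts:
```python
import jiwer
# no normalisation: raw strings, diacritics included
print(f"WER: {jiwer.wer(references, predictions):.4f}")
```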
---
## Intended use & limitations
Designed for **speech-to-text of Qurʾān recitations in Modern Standard Arabic**.
Not expected to work for:
* conversational Arabic, dialects or non-Qurʾānic liturgy
* noisy, low-quality microphones
* verses longer than 30 seconds
---
## Citation
```bibtex
@software{quran_whisper_lora_2024,
author = {Kheem Dharmani},
title = {Whisper-Base Qurʾān LoRA Adapter},
year = 2024,
url = {https://huggingface.co/KheemP/whisper-base-quran-lora}
}
```
---
## Licence
*Back-bone* weights under MIT (same as Whisper).
Dataset sourced from the public domain.
Adapter itself released under **MIT**.
---
|
fabikru/model_5M_large_ds_masking_0.6_predicted_hparamas | fabikru | 2025-05-23T10:21:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-05-23T01:43:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/AceReason-Nemotron-14B-i1-GGUF | mradermacher | 2025-05-23T09:57:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:quantized:nvidia/AceReason-Nemotron-14B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-05-23T06:38:46Z | ---
base_model: nvidia/AceReason-Nemotron-14B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nvidia/AceReason-Nemotron-14B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AceReason-Nemotron-14B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
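As a quick example, any file from the table below can be fetched and run directly with llama.cpp (Q4_K_M shown; adjust the file name to the quant you want):
```bash
llama-cli --hf-repo mradermacher/AceReason-Nemotron-14B-i1-GGUF \
  --hf-file AceReason-Nemotron-14B.i1-Q4_K_M.gguf \
  -p "Hello"
```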
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-14B-i1-GGUF/resolve/main/AceReason-Nemotron-14B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
winnieyangwannan/Llama-3.1-8B-Instruct_negative_addition_last_layer_12_2_song_ratio_3 | winnieyangwannan | 2025-05-23T09:53:48Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-18T03:46:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CooperW/4trading_tokenizer | CooperW | 2025-05-23T09:41:19Z | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T09:41:12Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zyzzc/Dans-PersonalityEngine-V1.3.0-12b-Q4_K_M-GGUF | zyzzc | 2025-05-23T09:40:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"legal",
"medical",
"finance",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ar",
"de",
"fr",
"es",
"hi",
"pt",
"ja",
"ko",
"dataset:PocketDoc/Dans-Prosemaxx-RP",
"dataset:PocketDoc/Dans-Personamaxx-Logs-2",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-XL",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2",
"dataset:PocketDoc/Dans-Prosemaxx-Instructwriter-Long",
"dataset:PocketDoc/Dans-Prosemaxx-RepRemover-1",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-apigen-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge-2",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Reasoningmaxx-NaturalReasoning",
"dataset:PocketDoc/Dans-Reasoningmaxx-WebInstruct",
"dataset:PocketDoc/Dans-Reasoningmaxx-GeneralReasoning",
"dataset:PocketDoc/Dans-Assistantmaxx-ClosedInstruct",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-12b",
"base_model:quantized:PocketDoc/Dans-PersonalityEngine-V1.3.0-12b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-23T09:40:24Z | ---
thumbnail: https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b/resolve/main/resources/pe.png
license: apache-2.0
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
- legal
- medical
- finance
- llama-cpp
- gguf-my-repo
datasets:
- PocketDoc/Dans-Prosemaxx-RP
- PocketDoc/Dans-Personamaxx-Logs-2
- PocketDoc/Dans-Personamaxx-VN
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Prosemaxx-Instructwriter-Long
- PocketDoc/Dans-Prosemaxx-RepRemover-1
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-Toolmaxx-Functions-apigen-subset
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge-2
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Reasoningmaxx-NaturalReasoning
- PocketDoc/Dans-Reasoningmaxx-WebInstruct
- PocketDoc/Dans-Reasoningmaxx-GeneralReasoning
- PocketDoc/Dans-Assistantmaxx-ClosedInstruct
language:
- en
- ar
- de
- fr
- es
- hi
- pt
- ja
- ko
base_model: PocketDoc/Dans-PersonalityEngine-V1.3.0-12b
pipeline_tag: text-generation
library_name: transformers
---
# zyzzc/Dans-PersonalityEngine-V1.3.0-12b-Q4_K_M-GGUF
This model was converted to GGUF format from [`PocketDoc/Dans-PersonalityEngine-V1.3.0-12b`](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zyzzc/Dans-PersonalityEngine-V1.3.0-12b-Q4_K_M-GGUF --hf-file dans-personalityengine-v1.3.0-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zyzzc/Dans-PersonalityEngine-V1.3.0-12b-Q4_K_M-GGUF --hf-file dans-personalityengine-v1.3.0-12b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zyzzc/Dans-PersonalityEngine-V1.3.0-12b-Q4_K_M-GGUF --hf-file dans-personalityengine-v1.3.0-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zyzzc/Dans-PersonalityEngine-V1.3.0-12b-Q4_K_M-GGUF --hf-file dans-personalityengine-v1.3.0-12b-q4_k_m.gguf -c 2048
```
|
Vishal94/t5-small-en-hi-v1 | Vishal94 | 2025-05-23T06:23:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-23T06:12:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Vishal Prem]
- **Model type:** [Machine Translation]
- **Language(s) (NLP):** [English,Hindi]
- **Finetuned from model [optional]:** [Flan-T5]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
hf_repo_name = "Vishal94/t5-small-en-hi-v1"
# Load model and tokenizer from Hugging Face Hub
tokenizer = T5Tokenizer.from_pretrained(hf_repo_name)
model = T5ForConditionalGeneration.from_pretrained(hf_repo_name)
# Example input
input_text = "Translate English to Hindi: What are you doing today?"
# Tokenize input
inputs = tokenizer(input_text, return_tensors="pt", padding=True)
# Generate output
outputs = model.generate(**inputs, max_new_tokens=64)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Output:", result)
```
|
xccds/maoxuan_adapter | xccds | 2025-05-23T06:16:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"zh",
"dataset:xccds/maoxuan",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T06:00:04Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: maoxuan_adapter
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: mit
datasets:
- xccds/maoxuan
language:
- zh
---
# Model Card for maoxuan_adapter
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xccds/maoxuan_adapter", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
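A minimal sketch of an equivalent SFT run with TRL is shown below; the exact hyperparameters and dataset formatting are not published, so treat the values as placeholders.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
# the card lists xccds/maoxuan as the training data; its column layout is assumed here
dataset = load_dataset("xccds/maoxuan", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # base model from the card
    train_dataset=dataset,
    args=SFTConfig(output_dir="maoxuan_adapter"),
)
trainer.train()
```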
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chano12/photo_sharing_summary | chano12 | 2025-05-23T06:09:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
]
| null | 2025-05-23T06:09:30Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
humehealthbodypod/humehealthbodypod | humehealthbodypod | 2025-05-23T06:09:08Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-23T06:08:27Z | Hume Health Body Pod App - Hume health Scale ! Hume Body Pod Reviews.
The Body Pod is not exactly a compact product. Some reviewers pointed out that it takes up considerable space in a room, making it less ideal for small apartments or people with limited space. Additionally, while it’s designed for home use, it's not particularly portable, so it may not be convenient to take it on trips.
Official Website:
https://www.offerplox.com/e-commerce/hume-health-body-pod-reviews/
|