Dataset columns:

| Column | Type | Range |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-29 00:46:34 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (502 classes) | |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string (54 classes) | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-29 00:44:25 |
| card | string | length 11 – 1.01M |
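A minimal sketch of how rows like the ones below might be filtered, assuming the dump has been exported to a local Parquet file (the filename `model_cards.parquet` is hypothetical):

```python
# Minimal sketch: filtering rows of this metadata dump with pandas.
# Assumes the dump is available locally as a Parquet file; the path
# "model_cards.parquet" is hypothetical.
import pandas as pd

df = pd.read_parquet("model_cards.parquet")

# Keep GGUF text-generation repos, most-downloaded first.
gguf = df[
    df["pipeline_tag"].eq("text-generation")
    & df["tags"].apply(lambda t: "gguf" in t)
].sort_values("downloads", ascending=False)

print(gguf[["modelId", "downloads", "likes", "last_modified"]].head(10))
```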
**modelId:** featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:42:06Z · **downloads:** 29 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1, base_model:quantized:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-06T02:17:33Z

---
base_model: ArliAI/ArliAI-RPMax-12B-v1.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ArliAI/ArliAI-RPMax-12B-v1.1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ArliAI-ArliAI-RPMax-12B-v1.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [ArliAI-ArliAI-RPMax-12B-v1.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [ArliAI-ArliAI-RPMax-12B-v1.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [ArliAI-ArliAI-RPMax-12B-v1.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [ArliAI-ArliAI-RPMax-12B-v1.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [ArliAI-ArliAI-RPMax-12B-v1.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [ArliAI-ArliAI-RPMax-12B-v1.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [ArliAI-ArliAI-RPMax-12B-v1.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [ArliAI-ArliAI-RPMax-12B-v1.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [ArliAI-ArliAI-RPMax-12B-v1.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [ArliAI-ArliAI-RPMax-12B-v1.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF/blob/main/ArliAI-ArliAI-RPMax-12B-v1.1-Q8_0.gguf) | 12419.10 MB |
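As an illustrative sketch (not part of the original card), one of the files above can be fetched and run locally with `huggingface_hub` and `llama-cpp-python`; the Q4_K_M choice and the context size are assumptions, not recommendations:

```python
# Hedged sketch: download one quant from this repo and run it locally.
# Requires `pip install huggingface_hub llama-cpp-python`; Q4_K_M and
# n_ctx=4096 are illustrative choices, not guidance from the card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="featherless-ai-quants/ArliAI-ArliAI-RPMax-12B-v1.1-GGUF",
    filename="ArliAI-ArliAI-RPMax-12B-v1.1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # context length is an assumption
out = llm("Write one sentence about GGUF quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```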
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:42:03Z · **downloads:** 7 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:Danielbrdz/Barcenas-2x10.7b-Korean, base_model:quantized:Danielbrdz/Barcenas-2x10.7b-Korean, endpoints_compatible, region:us · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T23:52:50Z

---
base_model: Danielbrdz/Barcenas-2x10.7b-Korean
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Danielbrdz/Barcenas-2x10.7b-Korean GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Danielbrdz-Barcenas-2x10.7b-Korean-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Danielbrdz-Barcenas-2x10.7b-Korean-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Danielbrdz-Barcenas-2x10.7b-Korean-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Danielbrdz-Barcenas-2x10.7b-Korean-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Danielbrdz-Barcenas-2x10.7b-Korean-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Danielbrdz-Barcenas-2x10.7b-Korean-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Danielbrdz-Barcenas-2x10.7b-Korean-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Danielbrdz-Barcenas-2x10.7b-Korean-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Danielbrdz-Barcenas-2x10.7b-Korean-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Danielbrdz-Barcenas-2x10.7b-Korean-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Danielbrdz-Barcenas-2x10.7b-Korean-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Danielbrdz-Barcenas-2x10.7b-Korean-GGUF/blob/main/Danielbrdz-Barcenas-2x10.7b-Korean-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:41:56Z · **downloads:** 84 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:KoboldAI/Mistral-7B-Erebus-v3, base_model:quantized:KoboldAI/Mistral-7B-Erebus-v3, endpoints_compatible, region:us · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T22:42:59Z

---
base_model: KoboldAI/Mistral-7B-Erebus-v3
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# KoboldAI/Mistral-7B-Erebus-v3 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [KoboldAI-Mistral-7B-Erebus-v3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [KoboldAI-Mistral-7B-Erebus-v3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [KoboldAI-Mistral-7B-Erebus-v3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [KoboldAI-Mistral-7B-Erebus-v3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [KoboldAI-Mistral-7B-Erebus-v3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [KoboldAI-Mistral-7B-Erebus-v3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [KoboldAI-Mistral-7B-Erebus-v3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [KoboldAI-Mistral-7B-Erebus-v3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [KoboldAI-Mistral-7B-Erebus-v3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [KoboldAI-Mistral-7B-Erebus-v3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [KoboldAI-Mistral-7B-Erebus-v3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:41:46Z · **downloads:** 41 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B, base_model:quantized:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T22:25:46Z

---
base_model: grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:41:43Z · **downloads:** 18 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:nbeerbower/mistral-nemo-wissenschaft-12B, base_model:quantized:nbeerbower/mistral-nemo-wissenschaft-12B, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T22:03:56Z

---
base_model: nbeerbower/mistral-nemo-wissenschaft-12B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/mistral-nemo-wissenschaft-12B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nbeerbower-mistral-nemo-wissenschaft-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [nbeerbower-mistral-nemo-wissenschaft-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [nbeerbower-mistral-nemo-wissenschaft-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [nbeerbower-mistral-nemo-wissenschaft-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nbeerbower-mistral-nemo-wissenschaft-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [nbeerbower-mistral-nemo-wissenschaft-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [nbeerbower-mistral-nemo-wissenschaft-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [nbeerbower-mistral-nemo-wissenschaft-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [nbeerbower-mistral-nemo-wissenschaft-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [nbeerbower-mistral-nemo-wissenschaft-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [nbeerbower-mistral-nemo-wissenschaft-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-wissenschaft-12B-GGUF/blob/main/nbeerbower-mistral-nemo-wissenschaft-12B-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:41:42Z · **downloads:** 6 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:automerger/T3qm7xNeuralsirkrishna-7B, base_model:quantized:automerger/T3qm7xNeuralsirkrishna-7B, endpoints_compatible, region:us · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T21:42:12Z

---
base_model: automerger/T3qm7xNeuralsirkrishna-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# automerger/T3qm7xNeuralsirkrishna-7B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [automerger-T3qm7xNeuralsirkrishna-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [automerger-T3qm7xNeuralsirkrishna-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [automerger-T3qm7xNeuralsirkrishna-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [automerger-T3qm7xNeuralsirkrishna-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [automerger-T3qm7xNeuralsirkrishna-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [automerger-T3qm7xNeuralsirkrishna-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [automerger-T3qm7xNeuralsirkrishna-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [automerger-T3qm7xNeuralsirkrishna-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [automerger-T3qm7xNeuralsirkrishna-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [automerger-T3qm7xNeuralsirkrishna-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [automerger-T3qm7xNeuralsirkrishna-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/automerger-T3qm7xNeuralsirkrishna-7B-GGUF/blob/main/automerger-T3qm7xNeuralsirkrishna-7B-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:41:37Z · **downloads:** 68 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:nbeerbower/mistral-nemo-gutades-12B, base_model:quantized:nbeerbower/mistral-nemo-gutades-12B, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T21:19:05Z

---
base_model: nbeerbower/mistral-nemo-gutades-12B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/mistral-nemo-gutades-12B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nbeerbower-mistral-nemo-gutades-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [nbeerbower-mistral-nemo-gutades-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [nbeerbower-mistral-nemo-gutades-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [nbeerbower-mistral-nemo-gutades-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nbeerbower-mistral-nemo-gutades-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [nbeerbower-mistral-nemo-gutades-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [nbeerbower-mistral-nemo-gutades-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [nbeerbower-mistral-nemo-gutades-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [nbeerbower-mistral-nemo-gutades-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [nbeerbower-mistral-nemo-gutades-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [nbeerbower-mistral-nemo-gutades-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:41:21Z · **downloads:** 52 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:icefog72/IceLemonTeaRP-32k-7b, base_model:quantized:icefog72/IceLemonTeaRP-32k-7b, endpoints_compatible, region:us · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T18:04:24Z

---
base_model: icefog72/IceLemonTeaRP-32k-7b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# icefog72/IceLemonTeaRP-32k-7b GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [icefog72-IceLemonTeaRP-32k-7b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [icefog72-IceLemonTeaRP-32k-7b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [icefog72-IceLemonTeaRP-32k-7b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [icefog72-IceLemonTeaRP-32k-7b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [icefog72-IceLemonTeaRP-32k-7b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [icefog72-IceLemonTeaRP-32k-7b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [icefog72-IceLemonTeaRP-32k-7b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [icefog72-IceLemonTeaRP-32k-7b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [icefog72-IceLemonTeaRP-32k-7b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [icefog72-IceLemonTeaRP-32k-7b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [icefog72-IceLemonTeaRP-32k-7b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/icefog72-IceLemonTeaRP-32k-7b-GGUF/blob/main/icefog72-IceLemonTeaRP-32k-7b-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:41:18Z · **downloads:** 12 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:fusionbase/fusion-guide-12b-0.1, base_model:quantized:fusionbase/fusion-guide-12b-0.1, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T18:01:06Z

---
base_model: fusionbase/fusion-guide-12b-0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# fusionbase/fusion-guide-12b-0.1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [fusionbase-fusion-guide-12b-0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [fusionbase-fusion-guide-12b-0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [fusionbase-fusion-guide-12b-0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [fusionbase-fusion-guide-12b-0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [fusionbase-fusion-guide-12b-0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [fusionbase-fusion-guide-12b-0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [fusionbase-fusion-guide-12b-0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [fusionbase-fusion-guide-12b-0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [fusionbase-fusion-guide-12b-0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [fusionbase-fusion-guide-12b-0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [fusionbase-fusion-guide-12b-0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/fusionbase-fusion-guide-12b-0.1-GGUF/blob/main/fusionbase-fusion-guide-12b-0.1-Q8_0.gguf) | 12419.10 MB |
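For choosing among the sizes above, a small helper like the following sketch can pick the largest quant that fits a memory budget; the 8 GB budget and the headroom factor are assumptions, not guidance from the card:

```python
# Illustrative sketch: pick the largest quant that fits a memory budget.
# Sizes (MB) are copied from the table above; the headroom factor is an
# assumed allowance for KV cache and runtime overhead.
SIZES_MB = {
    "Q2_K": 4569.10, "Q3_K_S": 5277.85, "Q3_K_M": 5801.29,
    "Q3_K_L": 6257.54, "IQ4_XS": 6485.04, "Q4_K_S": 6790.35,
    "Q4_K_M": 7130.82, "Q5_K_S": 8124.10, "Q5_K_M": 8323.32,
    "Q6_K": 9590.35, "Q8_0": 12419.10,
}

def pick_quant(budget_mb: float, headroom: float = 1.2) -> str:
    """Largest quant whose file (times a headroom factor) fits the budget."""
    fitting = {q: s for q, s in SIZES_MB.items() if s * headroom <= budget_mb}
    return max(fitting, key=fitting.get) if fitting else "nothing fits"

print(pick_quant(8 * 1024))  # e.g. an 8 GB budget -> Q4_K_S
```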
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:41:11Z · **downloads:** 8 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:Loyola/kulmistral-7b-it, base_model:quantized:Loyola/kulmistral-7b-it, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T16:23:42Z

---
base_model: Loyola/kulmistral-7b-it
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Loyola/kulmistral-7b-it GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Loyola-kulmistral-7b-it-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Loyola-kulmistral-7b-it-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Loyola-kulmistral-7b-it-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Loyola-kulmistral-7b-it-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Loyola-kulmistral-7b-it-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Loyola-kulmistral-7b-it-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Loyola-kulmistral-7b-it-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Loyola-kulmistral-7b-it-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Loyola-kulmistral-7b-it-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Loyola-kulmistral-7b-it-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Loyola-kulmistral-7b-it-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Loyola-kulmistral-7b-it-GGUF/blob/main/Loyola-kulmistral-7b-it-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:40:56Z · **downloads:** 20 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:unsloth/Mistral-Nemo-Instruct-2407, base_model:quantized:unsloth/Mistral-Nemo-Instruct-2407, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T15:21:40Z

---
base_model: unsloth/Mistral-Nemo-Instruct-2407
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# unsloth/Mistral-Nemo-Instruct-2407 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [unsloth-Mistral-Nemo-Instruct-2407-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [unsloth-Mistral-Nemo-Instruct-2407-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [unsloth-Mistral-Nemo-Instruct-2407-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [unsloth-Mistral-Nemo-Instruct-2407-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [unsloth-Mistral-Nemo-Instruct-2407-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [unsloth-Mistral-Nemo-Instruct-2407-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [unsloth-Mistral-Nemo-Instruct-2407-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [unsloth-Mistral-Nemo-Instruct-2407-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [unsloth-Mistral-Nemo-Instruct-2407-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [unsloth-Mistral-Nemo-Instruct-2407-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [unsloth-Mistral-Nemo-Instruct-2407-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Mistral-Nemo-Instruct-2407-GGUF/blob/main/unsloth-Mistral-Nemo-Instruct-2407-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/picAIso-TARS-8B-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:40:53Z · **downloads:** 9 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:picAIso/TARS-8B, base_model:quantized:picAIso/TARS-8B, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T15:21:33Z

---
base_model: picAIso/TARS-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# picAIso/TARS-8B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [picAIso-TARS-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [picAIso-TARS-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [picAIso-TARS-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [picAIso-TARS-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [picAIso-TARS-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [picAIso-TARS-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [picAIso-TARS-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [picAIso-TARS-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [picAIso-TARS-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [picAIso-TARS-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [picAIso-TARS-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q8_0.gguf) | 8145.11 MB |
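The sizes above also let one estimate the effective bits per weight each quant spends. The sketch below assumes roughly 8.03e9 parameters for a Llama-3.1-8B-class model, which is an assumption about this particular merge; GGUF files also carry metadata and some higher-precision tensors, so the figures slightly overstate the per-weight rate:

```python
# Back-of-the-envelope sketch: effective bits per weight implied by the
# file sizes in the table above. N_PARAMS is an assumed parameter count
# for a Llama-3.1-8B-class model, not a figure from the card.
N_PARAMS = 8.03e9

for quant, size_mb in [("Q2_K", 3031.86), ("Q4_K_M", 4692.78),
                       ("Q6_K", 6290.44), ("Q8_0", 8145.11)]:
    bits = size_mb * 1024 * 1024 * 8 / N_PARAMS
    print(f"{quant}: ~{bits:.2f} bits/weight")  # e.g. Q4_K_M -> ~4.90
```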
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:40:37Z · **downloads:** 6 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:heegyu/Mistral-7B-v0.1-OKI-v20231124-1e-5, base_model:quantized:heegyu/Mistral-7B-v0.1-OKI-v20231124-1e-5, endpoints_compatible, region:us · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T13:46:43Z

---
base_model: heegyu/Mistral-7B-v0.1-OKI-v20231124-1e-5
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# heegyu/Mistral-7B-v0.1-OKI-v20231124-1e-5 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q6_K.gguf) | 5666.79 MB |
| Q8_0 | [heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-GGUF/blob/main/heegyu-Mistral-7B-v0.1-OKI-v20231124-1e-5-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:40:06Z · **downloads:** 20 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half, base_model:quantized:lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T11:54:57Z

---
base_model: lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-half-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:40:00Z · **downloads:** 12 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:wang7776/Mistral-7B-Instruct-v0.2-attention-sparsity-20, base_model:quantized:wang7776/Mistral-7B-Instruct-v0.2-attention-sparsity-20, endpoints_compatible, region:us, conversational · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T11:20:50Z

---
base_model: wang7776/Mistral-7B-Instruct-v0.2-attention-sparsity-20
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# wang7776/Mistral-7B-Instruct-v0.2-attention-sparsity-20 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)

**modelId:** featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF · **author:** featherless-ai-quants · **last_modified:** 2024-11-10T19:39:55Z · **downloads:** 25 · **likes:** 0 · **library_name:** null · **tags:** gguf, text-generation, base_model:nbeerbower/Lyra-Gutenberg-mistral-nemo-12B, base_model:quantized:nbeerbower/Lyra-Gutenberg-mistral-nemo-12B, endpoints_compatible, region:us · **pipeline_tag:** text-generation · **createdAt:** 2024-11-05T10:47:19Z

---
base_model: nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/Lyra-Gutenberg-mistral-nemo-12B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q8_0.gguf) | 12419.10 MB |
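*A rough back-of-envelope check (not from the original card) of why these files scale the way they do: dividing file size by parameter count gives the effective bits per weight of each quantization. The ~12.25B parameter count for Mistral-Nemo is an assumption here, and the table's "MB" is treated as MiB.*

```python
def bits_per_weight(size_mib: float, n_params: float) -> float:
    """Estimate effective bits per weight from a GGUF file size."""
    return size_mib * 1024**2 * 8 / n_params

# Sizes taken from the Q4_K_M and Q8_0 rows above.
print(round(bits_per_weight(7130.82, 12.25e9), 2))   # Q4_K_M -> ~4.88 bpw
print(round(bits_per_weight(12419.10, 12.25e9), 2))  # Q8_0   -> ~8.5 bpw
```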
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF | featherless-ai-quants | 2024-11-10T19:39:52Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:CerebrumTech/cere-llama-3-8b-tr",
"base_model:quantized:CerebrumTech/cere-llama-3-8b-tr",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T10:41:31Z | ---
base_model: CerebrumTech/cere-llama-3-8b-tr
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# CerebrumTech/cere-llama-3-8b-tr GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [CerebrumTech-cere-llama-3-8b-tr-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [CerebrumTech-cere-llama-3-8b-tr-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [CerebrumTech-cere-llama-3-8b-tr-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [CerebrumTech-cere-llama-3-8b-tr-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [CerebrumTech-cere-llama-3-8b-tr-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [CerebrumTech-cere-llama-3-8b-tr-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [CerebrumTech-cere-llama-3-8b-tr-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [CerebrumTech-cere-llama-3-8b-tr-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [CerebrumTech-cere-llama-3-8b-tr-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [CerebrumTech-cere-llama-3-8b-tr-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [CerebrumTech-cere-llama-3-8b-tr-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF | featherless-ai-quants | 2024-11-10T19:39:38Z | 73 | 0 | null | [
"gguf",
"text-generation",
"base_model:NeverSleep/Lumimaid-v0.2-12B",
"base_model:quantized:NeverSleep/Lumimaid-v0.2-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T09:58:59Z | ---
base_model: NeverSleep/Lumimaid-v0.2-12B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# NeverSleep/Lumimaid-v0.2-12B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [NeverSleep-Lumimaid-v0.2-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [NeverSleep-Lumimaid-v0.2-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [NeverSleep-Lumimaid-v0.2-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [NeverSleep-Lumimaid-v0.2-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [NeverSleep-Lumimaid-v0.2-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [NeverSleep-Lumimaid-v0.2-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [NeverSleep-Lumimaid-v0.2-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q4_K_S.gguf) | 6790.36 MB |
| Q5_K_M | [NeverSleep-Lumimaid-v0.2-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [NeverSleep-Lumimaid-v0.2-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q5_K_S.gguf) | 8124.11 MB |
| Q6_K | [NeverSleep-Lumimaid-v0.2-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q6_K.gguf) | 9590.36 MB |
| Q8_0 | [NeverSleep-Lumimaid-v0.2-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Lumimaid-v0.2-12B-GGUF/blob/main/NeverSleep-Lumimaid-v0.2-12B-Q8_0.gguf) | 12419.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF | featherless-ai-quants | 2024-11-10T19:39:34Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:bunnycore/LLama-3.1-8B-Matrix",
"base_model:quantized:bunnycore/LLama-3.1-8B-Matrix",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T09:29:29Z | ---
base_model: bunnycore/LLama-3.1-8B-Matrix
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# bunnycore/LLama-3.1-8B-Matrix GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [bunnycore-LLama-3.1-8B-Matrix-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [bunnycore-LLama-3.1-8B-Matrix-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [bunnycore-LLama-3.1-8B-Matrix-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [bunnycore-LLama-3.1-8B-Matrix-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [bunnycore-LLama-3.1-8B-Matrix-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [bunnycore-LLama-3.1-8B-Matrix-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [bunnycore-LLama-3.1-8B-Matrix-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [bunnycore-LLama-3.1-8B-Matrix-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [bunnycore-LLama-3.1-8B-Matrix-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [bunnycore-LLama-3.1-8B-Matrix-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [bunnycore-LLama-3.1-8B-Matrix-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF | featherless-ai-quants | 2024-11-10T19:39:27Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO",
"base_model:quantized:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T08:55:36Z | ---
base_model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF | featherless-ai-quants | 2024-11-10T19:39:17Z | 57 | 0 | null | [
"gguf",
"text-generation",
"base_model:TheDrummer/Rocinante-12B-v1.1",
"base_model:quantized:TheDrummer/Rocinante-12B-v1.1",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T08:29:22Z | ---
base_model: TheDrummer/Rocinante-12B-v1.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# TheDrummer/Rocinante-12B-v1.1 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [TheDrummer-Rocinante-12B-v1.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [TheDrummer-Rocinante-12B-v1.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [TheDrummer-Rocinante-12B-v1.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [TheDrummer-Rocinante-12B-v1.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [TheDrummer-Rocinante-12B-v1.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [TheDrummer-Rocinante-12B-v1.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [TheDrummer-Rocinante-12B-v1.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [TheDrummer-Rocinante-12B-v1.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [TheDrummer-Rocinante-12B-v1.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [TheDrummer-Rocinante-12B-v1.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [TheDrummer-Rocinante-12B-v1.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q8_0.gguf) | 12419.10 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/cookinai-Blitz-v0.1-GGUF | featherless-ai-quants | 2024-11-10T19:39:12Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:cookinai/Blitz-v0.1",
"base_model:quantized:cookinai/Blitz-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T08:17:41Z | ---
base_model: cookinai/Blitz-v0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# cookinai/Blitz-v0.1 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [cookinai-Blitz-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [cookinai-Blitz-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [cookinai-Blitz-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [cookinai-Blitz-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [cookinai-Blitz-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [cookinai-Blitz-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [cookinai-Blitz-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [cookinai-Blitz-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [cookinai-Blitz-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [cookinai-Blitz-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [cookinai-Blitz-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF | featherless-ai-quants | 2024-11-10T19:39:08Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:lcw99/llama-3-8b-it-ko-chang",
"base_model:quantized:lcw99/llama-3-8b-it-ko-chang",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T07:38:37Z | ---
base_model: lcw99/llama-3-8b-it-ko-chang
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# lcw99/llama-3-8b-it-ko-chang GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [lcw99-llama-3-8b-it-ko-chang-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [lcw99-llama-3-8b-it-ko-chang-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [lcw99-llama-3-8b-it-ko-chang-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [lcw99-llama-3-8b-it-ko-chang-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [lcw99-llama-3-8b-it-ko-chang-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [lcw99-llama-3-8b-it-ko-chang-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [lcw99-llama-3-8b-it-ko-chang-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [lcw99-llama-3-8b-it-ko-chang-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [lcw99-llama-3-8b-it-ko-chang-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [lcw99-llama-3-8b-it-ko-chang-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [lcw99-llama-3-8b-it-ko-chang-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/lcw99-llama-3-8b-it-ko-chang-GGUF/blob/main/lcw99-llama-3-8b-it-ko-chang-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF | featherless-ai-quants | 2024-11-10T19:38:50Z | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:ichigoberry/MonarchPipe-7B-slerp",
"base_model:quantized:ichigoberry/MonarchPipe-7B-slerp",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T06:02:18Z | ---
base_model: ichigoberry/MonarchPipe-7B-slerp
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ichigoberry/MonarchPipe-7B-slerp GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ichigoberry-MonarchPipe-7B-slerp-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [ichigoberry-MonarchPipe-7B-slerp-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [ichigoberry-MonarchPipe-7B-slerp-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [ichigoberry-MonarchPipe-7B-slerp-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [ichigoberry-MonarchPipe-7B-slerp-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [ichigoberry-MonarchPipe-7B-slerp-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [ichigoberry-MonarchPipe-7B-slerp-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [ichigoberry-MonarchPipe-7B-slerp-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [ichigoberry-MonarchPipe-7B-slerp-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [ichigoberry-MonarchPipe-7B-slerp-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [ichigoberry-MonarchPipe-7B-slerp-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF | featherless-ai-quants | 2024-11-10T19:38:49Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:Locutusque/Hyperion-3.0-Mistral-7B-alpha",
"base_model:quantized:Locutusque/Hyperion-3.0-Mistral-7B-alpha",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T05:58:53Z | ---
base_model: Locutusque/Hyperion-3.0-Mistral-7B-alpha
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Locutusque/Hyperion-3.0-Mistral-7B-alpha GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF | featherless-ai-quants | 2024-11-10T19:38:47Z | 17 | 0 | null | [
"gguf",
"text-generation",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:quantized:eren23/dpo-binarized-NeutrixOmnibe-7B",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T05:53:13Z | ---
base_model: eren23/dpo-binarized-NeutrixOmnibe-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# eren23/dpo-binarized-NeutrixOmnibe-7B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [eren23-dpo-binarized-NeutrixOmnibe-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Azazelle-L3-RP_io-GGUF | featherless-ai-quants | 2024-11-10T19:38:44Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:Azazelle/L3-RP_io",
"base_model:quantized:Azazelle/L3-RP_io",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T05:40:48Z | ---
base_model: Azazelle/L3-RP_io
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Azazelle/L3-RP_io GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Azazelle-L3-RP_io-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [Azazelle-L3-RP_io-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [Azazelle-L3-RP_io-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [Azazelle-L3-RP_io-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [Azazelle-L3-RP_io-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [Azazelle-L3-RP_io-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [Azazelle-L3-RP_io-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [Azazelle-L3-RP_io-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [Azazelle-L3-RP_io-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [Azazelle-L3-RP_io-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [Azazelle-L3-RP_io-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF | featherless-ai-quants | 2024-11-10T19:38:41Z | 78 | 0 | null | [
"gguf",
"text-generation",
"base_model:ohyeah1/Pantheon-Hermes-rp",
"base_model:quantized:ohyeah1/Pantheon-Hermes-rp",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T05:40:29Z | ---
base_model: ohyeah1/Pantheon-Hermes-rp
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ohyeah1/Pantheon-Hermes-rp GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ohyeah1-Pantheon-Hermes-rp-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [ohyeah1-Pantheon-Hermes-rp-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [ohyeah1-Pantheon-Hermes-rp-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [ohyeah1-Pantheon-Hermes-rp-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [ohyeah1-Pantheon-Hermes-rp-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [ohyeah1-Pantheon-Hermes-rp-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [ohyeah1-Pantheon-Hermes-rp-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [ohyeah1-Pantheon-Hermes-rp-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [ohyeah1-Pantheon-Hermes-rp-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [ohyeah1-Pantheon-Hermes-rp-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [ohyeah1-Pantheon-Hermes-rp-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/antiven0m-reverie-7b-GGUF | featherless-ai-quants | 2024-11-10T19:38:39Z | 15 | 0 | null | [
"gguf",
"text-generation",
"base_model:antiven0m/reverie-7b",
"base_model:quantized:antiven0m/reverie-7b",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T05:24:44Z | ---
base_model: antiven0m/reverie-7b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# antiven0m/reverie-7b GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [antiven0m-reverie-7b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [antiven0m-reverie-7b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [antiven0m-reverie-7b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [antiven0m-reverie-7b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [antiven0m-reverie-7b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [antiven0m-reverie-7b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [antiven0m-reverie-7b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [antiven0m-reverie-7b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q5_K_M.gguf) | 4893.70 MB |
| Q5_K_S | [antiven0m-reverie-7b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q5_K_S.gguf) | 4766.20 MB |
| Q6_K | [antiven0m-reverie-7b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [antiven0m-reverie-7b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/antiven0m-reverie-7b-GGUF/blob/main/antiven0m-reverie-7b-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF | featherless-ai-quants | 2024-11-10T19:38:36Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL",
"base_model:quantized:ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T05:22:27Z | ---
base_model: ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-GGUF/blob/main/ruslanmv-Meta-Llama-3.1-8B-Text-to-SQL-Q8_0.gguf) | 8145.11 MB |
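*A hedged sketch (not part of the original card) of choosing among the rows above: given a memory budget, pick the largest quantization whose file fits. The sizes are the MB values from this table; `pick_quant` is a hypothetical helper, and actual RAM use will exceed the file size once the KV cache and runtime overhead are included.*

```python
# File sizes (MB) copied from the quantization table above.
QUANT_SIZES_MB = {
    "IQ4_XS": 4276.62, "Q2_K": 3031.86, "Q3_K_L": 4121.74,
    "Q3_K_M": 3832.74, "Q3_K_S": 3494.74, "Q4_K_M": 4692.78,
    "Q4_K_S": 4475.28, "Q5_K_M": 5467.40, "Q5_K_S": 5339.90,
    "Q6_K": 6290.44, "Q8_0": 8145.11,
}

def pick_quant(budget_mb: float) -> str:
    """Return the largest listed quant whose file fits within budget_mb."""
    fitting = {q: s for q, s in QUANT_SIZES_MB.items() if s <= budget_mb}
    if not fitting:
        raise ValueError(f"no listed quant fits in {budget_mb} MB")
    return max(fitting, key=fitting.get)

print(pick_quant(6000.0))  # -> "Q5_K_M"
```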
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF | featherless-ai-quants | 2024-11-10T19:38:15Z | 14 | 0 | null | [
"gguf",
"text-generation",
"base_model:BarryFutureman/WestLakeX-7B-EvoMerge-Variant2",
"base_model:quantized:BarryFutureman/WestLakeX-7B-EvoMerge-Variant2",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T04:18:09Z | ---
base_model: BarryFutureman/WestLakeX-7B-EvoMerge-Variant2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# BarryFutureman/WestLakeX-7B-EvoMerge-Variant2 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF | featherless-ai-quants | 2024-11-10T19:38:11Z | 29 | 0 | null | [
"gguf",
"text-generation",
"base_model:uukuguy/speechless-mistral-hermes-code-7b",
"base_model:quantized:uukuguy/speechless-mistral-hermes-code-7b",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T04:15:02Z | ---
base_model: uukuguy/speechless-mistral-hermes-code-7b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# uukuguy/speechless-mistral-hermes-code-7b GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [uukuguy-speechless-mistral-hermes-code-7b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [uukuguy-speechless-mistral-hermes-code-7b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [uukuguy-speechless-mistral-hermes-code-7b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [uukuguy-speechless-mistral-hermes-code-7b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [uukuguy-speechless-mistral-hermes-code-7b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [uukuguy-speechless-mistral-hermes-code-7b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [uukuguy-speechless-mistral-hermes-code-7b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [uukuguy-speechless-mistral-hermes-code-7b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [uukuguy-speechless-mistral-hermes-code-7b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [uukuguy-speechless-mistral-hermes-code-7b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [uukuguy-speechless-mistral-hermes-code-7b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/uukuguy-speechless-mistral-hermes-code-7b-GGUF/blob/main/uukuguy-speechless-mistral-hermes-code-7b-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF | featherless-ai-quants | 2024-11-10T19:38:04Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:Kukedlc/NeuTrixOmniBe-DPO",
"base_model:quantized:Kukedlc/NeuTrixOmniBe-DPO",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T03:51:28Z | ---
base_model: Kukedlc/NeuTrixOmniBe-DPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Kukedlc/NeuTrixOmniBe-DPO GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Kukedlc-NeuTrixOmniBe-DPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Kukedlc-NeuTrixOmniBe-DPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Kukedlc-NeuTrixOmniBe-DPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Kukedlc-NeuTrixOmniBe-DPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Kukedlc-NeuTrixOmniBe-DPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Kukedlc-NeuTrixOmniBe-DPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Kukedlc-NeuTrixOmniBe-DPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Kukedlc-NeuTrixOmniBe-DPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Kukedlc-NeuTrixOmniBe-DPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Kukedlc-NeuTrixOmniBe-DPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Kukedlc-NeuTrixOmniBe-DPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-NeuTrixOmniBe-DPO-GGUF/blob/main/Kukedlc-NeuTrixOmniBe-DPO-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF | featherless-ai-quants | 2024-11-10T19:38:01Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:amd/Meta-Llama-3-8B_fp8_quark",
"base_model:quantized:amd/Meta-Llama-3-8B_fp8_quark",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T03:47:06Z | ---
base_model: amd/Meta-Llama-3-8B_fp8_quark
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# amd/Meta-Llama-3-8B_fp8_quark GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [amd-Meta-Llama-3-8B_fp8_quark-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [amd-Meta-Llama-3-8B_fp8_quark-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [amd-Meta-Llama-3-8B_fp8_quark-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [amd-Meta-Llama-3-8B_fp8_quark-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [amd-Meta-Llama-3-8B_fp8_quark-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [amd-Meta-Llama-3-8B_fp8_quark-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [amd-Meta-Llama-3-8B_fp8_quark-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [amd-Meta-Llama-3-8B_fp8_quark-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [amd-Meta-Llama-3-8B_fp8_quark-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [amd-Meta-Llama-3-8B_fp8_quark-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF | featherless-ai-quants | 2024-11-10T19:37:54Z | 21 | 0 | null | [
"gguf",
"text-generation",
"base_model:XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k",
"base_model:quantized:XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T03:22:45Z | ---
base_model: XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-GGUF/blob/main/XavierSpycy-Meta-Llama-3-8B-Instruct-zh-10k-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF | featherless-ai-quants | 2024-11-10T19:37:45Z | 15 | 0 | null | [
"gguf",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T02:38:55Z | ---
base_model: NurtureAI/Meta-Llama-3-8B-Instruct-64k
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# NurtureAI/Meta-Llama-3-8B-Instruct-64k GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/NurtureAI-Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/NurtureAI-Meta-Llama-3-8B-Instruct-64k-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/leesalminen-model-3-GGUF | featherless-ai-quants | 2024-11-10T19:37:42Z | 70 | 0 | null | [
"gguf",
"text-generation",
"base_model:leesalminen/model-3",
"base_model:quantized:leesalminen/model-3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T02:31:34Z | ---
base_model: leesalminen/model-3
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# leesalminen/model-3 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [leesalminen-model-3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-IQ4_XS.gguf) | 4276.63 MB |
| Q2_K | [leesalminen-model-3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [leesalminen-model-3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [leesalminen-model-3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [leesalminen-model-3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [leesalminen-model-3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [leesalminen-model-3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [leesalminen-model-3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q5_K_M.gguf) | 5467.41 MB |
| Q5_K_S | [leesalminen-model-3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q5_K_S.gguf) | 5339.91 MB |
| Q6_K | [leesalminen-model-3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q6_K.gguf) | 6290.45 MB |
| Q8_0 | [leesalminen-model-3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/leesalminen-model-3-GGUF/blob/main/leesalminen-model-3-Q8_0.gguf) | 8145.12 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/bongbongs-NewMes-v15-GGUF | featherless-ai-quants | 2024-11-10T19:37:39Z | 15 | 0 | null | [
"gguf",
"text-generation",
"base_model:bongbongs/NewMes-v15",
"base_model:quantized:bongbongs/NewMes-v15",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T02:30:48Z | ---
base_model: bongbongs/NewMes-v15
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# bongbongs/NewMes-v15 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [bongbongs-NewMes-v15-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [bongbongs-NewMes-v15-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [bongbongs-NewMes-v15-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [bongbongs-NewMes-v15-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [bongbongs-NewMes-v15-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [bongbongs-NewMes-v15-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [bongbongs-NewMes-v15-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [bongbongs-NewMes-v15-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [bongbongs-NewMes-v15-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [bongbongs-NewMes-v15-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [bongbongs-NewMes-v15-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/bongbongs-NewMes-v15-GGUF/blob/main/bongbongs-NewMes-v15-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF | featherless-ai-quants | 2024-11-10T19:37:36Z | 16 | 0 | null | [
"gguf",
"text-generation",
"base_model:AgentPublic/llama3-instruct-guillaumetell",
"base_model:quantized:AgentPublic/llama3-instruct-guillaumetell",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T02:26:02Z | ---
base_model: AgentPublic/llama3-instruct-guillaumetell
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# AgentPublic/llama3-instruct-guillaumetell GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [AgentPublic-llama3-instruct-guillaumetell-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [AgentPublic-llama3-instruct-guillaumetell-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [AgentPublic-llama3-instruct-guillaumetell-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [AgentPublic-llama3-instruct-guillaumetell-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [AgentPublic-llama3-instruct-guillaumetell-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [AgentPublic-llama3-instruct-guillaumetell-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [AgentPublic-llama3-instruct-guillaumetell-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [AgentPublic-llama3-instruct-guillaumetell-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [AgentPublic-llama3-instruct-guillaumetell-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [AgentPublic-llama3-instruct-guillaumetell-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [AgentPublic-llama3-instruct-guillaumetell-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/AgentPublic-llama3-instruct-guillaumetell-GGUF/blob/main/AgentPublic-llama3-instruct-guillaumetell-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF | featherless-ai-quants | 2024-11-10T19:37:33Z | 14 | 0 | null | [
"gguf",
"text-generation",
"base_model:OpenRLHF/Llama-2-13b-sft-model-ocra-500k",
"base_model:quantized:OpenRLHF/Llama-2-13b-sft-model-ocra-500k",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T02:08:04Z | ---
base_model: OpenLLMAI/Llama-2-13b-sft-model-ocra-500k
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# OpenLLMAI/Llama-2-13b-sft-model-ocra-500k GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-IQ4_XS.gguf) | 6694.34 MB |
| Q2_K | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q2_K.gguf) | 4629.39 MB |
| Q3_K_L | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q3_K_L.gguf) | 6608.54 MB |
| Q3_K_M | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q3_K_M.gguf) | 6044.17 MB |
| Q3_K_S | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q3_K_S.gguf) | 5396.83 MB |
| Q4_K_M | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q4_K_M.gguf) | 7501.56 MB |
| Q4_K_S | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q4_K_S.gguf) | 7079.30 MB |
| Q5_K_M | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q5_K_M.gguf) | 8802.34 MB |
| Q5_K_S | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q5_K_S.gguf) | 8556.64 MB |
| Q6_K | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q6_K.gguf) | 10184.42 MB |
| Q8_0 | [OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-GGUF/blob/main/OpenLLMAI-Llama-2-13b-sft-model-ocra-500k-Q8_0.gguf) | 13190.58 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF | featherless-ai-quants | 2024-11-10T19:37:08Z | 76 | 0 | null | [
"gguf",
"text-generation",
"base_model:AuriAetherwiing/MN-12B-Starcannon-v2",
"base_model:quantized:AuriAetherwiing/MN-12B-Starcannon-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T01:03:51Z | ---
base_model: AuriAetherwiing/MN-12B-Starcannon-v2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# AuriAetherwiing/MN-12B-Starcannon-v2 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [AuriAetherwiing-MN-12B-Starcannon-v2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [AuriAetherwiing-MN-12B-Starcannon-v2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [AuriAetherwiing-MN-12B-Starcannon-v2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [AuriAetherwiing-MN-12B-Starcannon-v2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [AuriAetherwiing-MN-12B-Starcannon-v2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [AuriAetherwiing-MN-12B-Starcannon-v2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [AuriAetherwiing-MN-12B-Starcannon-v2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [AuriAetherwiing-MN-12B-Starcannon-v2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [AuriAetherwiing-MN-12B-Starcannon-v2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [AuriAetherwiing-MN-12B-Starcannon-v2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [AuriAetherwiing-MN-12B-Starcannon-v2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/AuriAetherwiing-MN-12B-Starcannon-v2-GGUF/blob/main/AuriAetherwiing-MN-12B-Starcannon-v2-Q8_0.gguf) | 12419.10 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF | featherless-ai-quants | 2024-11-10T19:37:07Z | 24 | 0 | null | [
"gguf",
"text-generation",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:quantized:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T00:55:52Z | ---
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# macadeliccc/WestLake-7B-v2-laser-truthy-dpo GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-GGUF/blob/main/macadeliccc-WestLake-7B-v2-laser-truthy-dpo-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF | featherless-ai-quants | 2024-11-10T19:37:05Z | 23 | 0 | null | [
"gguf",
"text-generation",
"base_model:Locutusque/OpenCerebrum-1.5-Mistral-7B-v0.2-beta",
"base_model:quantized:Locutusque/OpenCerebrum-1.5-Mistral-7B-v0.2-beta",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T00:27:38Z | ---
base_model: Locutusque/OpenCerebrum-1.5-Mistral-7B-v0.2-beta
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Locutusque/OpenCerebrum-1.5-Mistral-7B-v0.2-beta GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-GGUF/blob/main/Locutusque-OpenCerebrum-1.5-Mistral-7B-v0.2-beta-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF | featherless-ai-quants | 2024-11-10T19:37:02Z | 24 | 0 | null | [
"gguf",
"text-generation",
"base_model:jondurbin/bagel-7b-v0.1",
"base_model:quantized:jondurbin/bagel-7b-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-05T00:05:33Z | ---
base_model: jondurbin/bagel-7b-v0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# jondurbin/bagel-7b-v0.1 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [jondurbin-bagel-7b-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [jondurbin-bagel-7b-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [jondurbin-bagel-7b-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [jondurbin-bagel-7b-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [jondurbin-bagel-7b-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [jondurbin-bagel-7b-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [jondurbin-bagel-7b-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [jondurbin-bagel-7b-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [jondurbin-bagel-7b-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [jondurbin-bagel-7b-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [jondurbin-bagel-7b-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/jondurbin-bagel-7b-v0.1-GGUF/blob/main/jondurbin-bagel-7b-v0.1-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
prithivMLmods/Pastel-BG-Flux-LoRA | prithivMLmods | 2024-11-10T19:36:33Z | 650 | 14 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"Pastel",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-11-10T19:26:57Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- Pastel
widget:
- text: 'Pastel BG, a young woman with brown hair and blue eyes stands in front of a colorful backdrop. The womans face is adorned with freckles, adding a pop of color to her outfit. The backdrop is a vibrant shade of purple, with yellow stars and stripes on it.'
output:
url: images/PB1.png
- text: 'Pastel BG, An eye-level view of a gray tabby cat with long white whiskers and a pink nose. The cats head is tilted slightly to the right, and its eyes are wide open. Its ears are pointed up, and the cats fur is a mix of gray and black. The background is a combination of pink, purple, and yellow, with white dots dotting the background. To the left of the cat, there is a purple star with a white butterfly on it.'
output:
url: images/PB2.png
- text: 'Pastel BG, a man stands in front of a colorful backdrop. He is dressed in a light pink suit jacket, a yellow collared shirt, and a pair of sunglasses. His hair is styled in a short bob, and his eyes are slightly open. His lips are slightly parted, as if he is looking to the right. The backdrop is a combination of pink, yellow, and green, with small white stars on the right side of the wall.'
output:
url: images/PB3.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Pastel BG
license: creativeml-openrail-m
---
# Pastel-BG-Flux-LoRA
<Gallery />
- Hosted Here 🧨: https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Pastel-BG-Flux-LoRA**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 28 & 3340 |
| Epoch | 15 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 18 [Hi-RES]
## Best Dimensions
- 1024 x 1024 (Default)
## Setting Up
```
import torch
from diffusers import DiffusionPipeline

# Load the FLUX.1-dev base model in bfloat16.
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Attach the Pastel-BG LoRA weights.
lora_repo = "prithivMLmods/Pastel-BG-Flux-LoRA"
trigger_word = "Pastel BG"
pipe.load_lora_weights(lora_repo)

# Move the pipeline to GPU.
device = torch.device("cuda")
pipe.to(device)
```
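A minimal generation call might then look like the following (prompt and settings are illustrative; `Pastel BG` is the trigger word documented below):
```
prompt = "Pastel BG, a young woman with brown hair in front of a colorful backdrop"
# 1024 x 1024 is the recommended resolution for this LoRA.
image = pipe(prompt, width=1024, height=1024).images[0]
image.save("pastel_bg.png")
```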
## Trigger words
You should use `Pastel BG` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Pastel-BG-Flux-LoRA/tree/main) them in the Files & versions tab. |
iamjoshgreen/mspackage | iamjoshgreen | 2024-11-10T19:32:43Z | 6 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-10T19:03:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mspackage
---
# Mspackage
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mspackage` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('iamjoshgreen/mspackage', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
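As one illustrative option from those docs (a sketch, assuming a recent `diffusers` release), the adapter can be fused into the base weights at a chosen strength:
```py
# Fuse the LoRA into the base weights at 80% strength (illustrative value),
# which removes the per-step adapter overhead during inference.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('a photo of mspackage on a wooden table').images[0]
```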
|
featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF | featherless-ai-quants | 2024-11-10T19:31:49Z | 24 | 1 | null | [
"gguf",
"text-generation",
"base_model:FallenMerick/Chunky-Lemon-Cookie-11B",
"base_model:quantized:FallenMerick/Chunky-Lemon-Cookie-11B",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T08:11:37Z | ---
base_model: FallenMerick/Chunky-Lemon-Cookie-11B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# FallenMerick/Chunky-Lemon-Cookie-11B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [FallenMerick-Chunky-Lemon-Cookie-11B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-IQ4_XS.gguf) | 5557.67 MB |
| Q2_K | [FallenMerick-Chunky-Lemon-Cookie-11B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q2_K.gguf) | 3817.78 MB |
| Q3_K_L | [FallenMerick-Chunky-Lemon-Cookie-11B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q3_K_L.gguf) | 5388.98 MB |
| Q3_K_M | [FallenMerick-Chunky-Lemon-Cookie-11B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q3_K_M.gguf) | 4954.98 MB |
| Q3_K_S | [FallenMerick-Chunky-Lemon-Cookie-11B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q3_K_S.gguf) | 4448.48 MB |
| Q4_K_M | [FallenMerick-Chunky-Lemon-Cookie-11B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q4_K_M.gguf) | 6162.33 MB |
| Q4_K_S | [FallenMerick-Chunky-Lemon-Cookie-11B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q4_K_S.gguf) | 5835.08 MB |
| Q5_K_M | [FallenMerick-Chunky-Lemon-Cookie-11B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q5_K_M.gguf) | 7245.95 MB |
| Q5_K_S | [FallenMerick-Chunky-Lemon-Cookie-11B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q5_K_S.gguf) | 7054.70 MB |
| Q6_K | [FallenMerick-Chunky-Lemon-Cookie-11B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q6_K.gguf) | 8397.30 MB |
| Q8_0 | [FallenMerick-Chunky-Lemon-Cookie-11B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Chunky-Lemon-Cookie-11B-GGUF/blob/main/FallenMerick-Chunky-Lemon-Cookie-11B-Q8_0.gguf) | 10875.85 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
zelk12/MT-Gen2-MA-gemma-2-MT4RAv0.1t0.25-9B | zelk12 | 2024-11-10T19:24:22Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT4-gemma-2-9B",
"base_model:merge:zelk12/MT4-gemma-2-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T19:18:14Z | ---
base_model:
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- zelk12/MT4-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
* [zelk12/MT4-gemma-2-9B](https://huggingface.co/zelk12/MT4-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT4-gemma-2-9B
- model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
merge_method: slerp
base_model: zelk12/MT4-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
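For intuition, SLERP interpolates the two checkpoints along the unit sphere rather than along a straight line, with `t: 0.25` keeping the result close to the base model. A minimal tensor-level sketch of the idea (illustrative only, not mergekit's actual implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors a and b."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors on the unit sphere
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel -> fall back to linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# t = 0.25, as in the config above, stays close to the base model's weights
merged = slerp(0.25, torch.randn(4, 4), torch.randn(4, 4))
```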
|
waloneai/mawc-cc | waloneai | 2024-11-10T19:20:21Z | 189 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-10T19:20:18Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mawc
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# mawc cc
<Gallery />
## Model description
## Trigger words
You should use `mawc` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/shweaung/mawc-cc/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
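A minimal inference sketch with diffusers is shown below (assumptions: access to the gated FLUX.1-dev base weights, a CUDA GPU, and that `load_lora_weights` can read this repo's Safetensors file directly):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach this LoRA adapter
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("waloneai/mawc-cc")

# The trigger word `mawc` activates the adapter's learned concept
image = pipe("a portrait photo of mawc", num_inference_steps=28).images[0]
image.save("mawc.png")
```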
|
waloneai/mawc | waloneai | 2024-11-10T19:19:01Z | 6 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-10T19:18:57Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mawc
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# mawc
<Gallery />
## Model description
## Trigger words
You should use `mawc` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/shweaung/mawc/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
zelk12/MT-Gen2-BB-gemma-2-MTMMT2-9B | zelk12 | 2024-11-10T19:14:32Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Merge-gemma-2-9B",
"base_model:merge:zelk12/MT-Merge-gemma-2-9B",
"base_model:zelk12/MT2-gemma-2-9B",
"base_model:merge:zelk12/MT2-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T19:08:18Z | ---
base_model:
- zelk12/MT2-gemma-2-9B
- zelk12/MT-Merge-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT2-gemma-2-9B](https://huggingface.co/zelk12/MT2-gemma-2-9B)
* [zelk12/MT-Merge-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Merge-gemma-2-9B
- model: zelk12/MT2-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT-Merge-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
ihughes15234/phi35_tictactoe_dpo2epoch_v5 | ihughes15234 | 2024-11-10T19:14:02Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:ihughes15234/phi35_tictactoe_dpo1epoch_v5",
"base_model:finetune:ihughes15234/phi35_tictactoe_dpo1epoch_v5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T18:58:50Z | ---
base_model: ihughes15234/phi35_tictactoe_dpo1epoch_v5
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ihughes15234
- **License:** apache-2.0
- **Finetuned from model:** ihughes15234/phi35_tictactoe_dpo1epoch_v5
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Abiggj99/stock-summary-model | Abiggj99 | 2024-11-10T19:11:59Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-09T16:11:48Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: stock-summary-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stock-summary-model
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
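These settings map directly onto `transformers` training arguments; the sketch below mirrors them (illustrative only, not the original training script — `output_dir` is a placeholder):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above
args = Seq2SeqTrainingArguments(
    output_dir="stock-summary-model",   # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,      # effective batch size of 16
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                          # "Native AMP" mixed precision
)
```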
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 3.0099 |
| No log | 2.0 | 34 | 0.8798 |
| 3.3639 | 3.0 | 51 | 0.1632 |
| 3.3639 | 4.0 | 68 | 0.0385 |
| 3.3639 | 5.0 | 85 | 0.0146 |
| 0.0802 | 6.0 | 102 | 0.0091 |
| 0.0802 | 7.0 | 119 | 0.0067 |
| 0.0802 | 8.0 | 136 | 0.0057 |
| 0.0147 | 9.0 | 153 | 0.0048 |
| 0.0147 | 10.0 | 170 | 0.0047 |
### Framework versions
- Transformers 4.44.1
- Pytorch 2.4.1
- Datasets 1.18.3
- Tokenizers 0.19.1
|
LBK95/Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V4 | LBK95 | 2024-11-10T19:10:01Z | 16 | 0 | peft | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-11-10T11:55:22Z | ---
base_model: meta-llama/Llama-2-7b-hf
library_name: peft
license: llama2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V4
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2125
- Rewards/chosen: -3.3104
- Rewards/rejected: -2.9319
- Rewards/accuracies: 0.4167
- Rewards/margins: -0.3786
- Logps/rejected: -192.9225
- Logps/chosen: -170.2794
- Logits/rejected: 0.1199
- Logits/chosen: 0.1595
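Because this repo holds a PEFT (LoRA) adapter rather than full weights, inference requires the gated Llama-2 base model; a minimal loading sketch (the base model id is read from the adapter config):

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "LBK95/Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V4"

# AutoPeftModelForCausalLM pulls the base model named in the adapter config
# (meta-llama/Llama-2-7b-hf) and applies the LoRA weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```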
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6179 | 0.3027 | 79 | 0.7115 | -0.1031 | -0.0593 | 0.25 | -0.0438 | -164.1966 | -138.2057 | 0.5429 | 0.5748 |
| 0.6065 | 0.6054 | 158 | 0.7348 | -0.0751 | 0.0129 | 0.25 | -0.0879 | -163.4753 | -137.9259 | 0.5242 | 0.5565 |
| 0.621 | 0.9080 | 237 | 0.7932 | -0.0433 | 0.1366 | 0.5 | -0.1800 | -162.2375 | -137.6083 | 0.4932 | 0.5259 |
| 0.4714 | 1.2107 | 316 | 0.7928 | -0.6963 | -0.5927 | 0.5 | -0.1037 | -169.5308 | -144.1387 | 0.4698 | 0.5037 |
| 0.3829 | 1.5134 | 395 | 0.8637 | -1.6604 | -1.5528 | 0.3333 | -0.1075 | -179.1323 | -153.7787 | 0.3664 | 0.4026 |
| 0.3589 | 1.8161 | 474 | 0.9222 | -1.4397 | -1.1360 | 0.25 | -0.3037 | -174.9637 | -151.5720 | 0.3400 | 0.3770 |
| 0.2138 | 2.1188 | 553 | 0.9860 | -1.9991 | -1.6486 | 0.3333 | -0.3505 | -180.0903 | -157.1666 | 0.2605 | 0.2992 |
| 0.0437 | 2.4215 | 632 | 1.1781 | -3.1628 | -2.7961 | 0.4167 | -0.3666 | -191.5652 | -168.8030 | 0.1441 | 0.1838 |
| 0.1667 | 2.7241 | 711 | 1.2125 | -3.3104 | -2.9319 | 0.4167 | -0.3786 | -192.9225 | -170.2794 | 0.1199 | 0.1595 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1 |
mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF | mradermacher | 2024-11-10T19:07:12Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"ko",
"base_model:Edentns/DataVortexS-10.7B-dpo-v1.7",
"base_model:quantized:Edentns/DataVortexS-10.7B-dpo-v1.7",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-11-10T09:29:12Z | ---
base_model: Edentns/DataVortexS-10.7B-dpo-v1.7
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Edentns/DataVortexS-10.7B-dpo-v1.7
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
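For example, any quant from the table below can be run locally with `llama-cpp-python` once downloaded (a minimal sketch; the filename is one of the listed entries):

```python
from llama_cpp import Llama

# model_path points at a quant downloaded from the table below
llm = Llama(model_path="DataVortexS-10.7B-dpo-v1.7.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```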
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ2_S.gguf) | i1-IQ2_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q2_K.gguf) | i1-Q2_K | 4.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ3_S.gguf) | i1-IQ3_S | 4.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ3_M.gguf) | i1-IQ3_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 6.3 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 6.3 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 6.3 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q4_0.gguf) | i1-Q4_0 | 6.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-dpo-v1.7-i1-GGUF/resolve/main/DataVortexS-10.7B-dpo-v1.7.i1-Q6_K.gguf) | i1-Q6_K | 9.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
huyquoctrinh/musicgen-melody-lora-punk | huyquoctrinh | 2024-11-10T19:06:08Z | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"musicgen_melody",
"text-to-audio",
"ylacombe/tiny-punk",
"generated_from_trainer",
"base_model:facebook/musicgen-melody",
"base_model:adapter:facebook/musicgen-melody",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-audio | 2024-11-10T18:58:59Z | ---
library_name: peft
license: cc-by-nc-4.0
base_model: facebook/musicgen-melody
tags:
- text-to-audio
- ylacombe/tiny-punk
- generated_from_trainer
model-index:
- name: musicgen-melody-lora-punk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# musicgen-melody-lora-punk
This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the YLACOMBE/TINY-PUNK - DEFAULT dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7421
- Clap: -0.0067
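A hedged inference sketch follows (assumptions: the `MusicgenMelodyForConditionalGeneration` class from recent `transformers` releases, and that the adapter loads cleanly via `peft`):

```python
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration
from peft import PeftModel

# Load the musicgen-melody base model, then attach this LoRA adapter
processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
base = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")
model = PeftModel.from_pretrained(base, "huyquoctrinh/musicgen-melody-lora-punk")

inputs = processor(text=["fast punk riff with driving drums"], padding=True, return_tensors="pt")
audio = model.generate(**inputs, max_new_tokens=256)  # a few seconds of audio tokens
```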
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 2
- seed: 456
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
zelk12/MT-Gen2-IF-gemma-2-MTMMT1-9B | zelk12 | 2024-11-10T19:02:59Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Merge-gemma-2-9B",
"base_model:merge:zelk12/MT-Merge-gemma-2-9B",
"base_model:zelk12/MT1-gemma-2-9B",
"base_model:merge:zelk12/MT1-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T18:56:36Z | ---
base_model:
- zelk12/MT-Merge-gemma-2-9B
- zelk12/MT1-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT-Merge-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge-gemma-2-9B)
* [zelk12/MT1-gemma-2-9B](https://huggingface.co/zelk12/MT1-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Merge-gemma-2-9B
- model: zelk12/MT1-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT-Merge-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
AbuZaforCSE/BanglaFinGPT | AbuZaforCSE | 2024-11-10T18:56:29Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-11-10T18:42:02Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mav23/mistral-rrc-GGUF | mav23 | 2024-11-10T18:55:21Z | 184 | 0 | null | [
"gguf",
"legal",
"housing",
"covenants",
"property",
"deed",
"racial-covenant",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:quantized:mistralai/Mistral-7B-v0.1",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-11-10T17:56:45Z | ---
license: mit
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
tags:
- legal
- housing
- covenants
- property
- deed
- racial-covenant
---
# reglab-rrc/mistral-rrc
**Paper:** [AI for Scaling Legal Reform: Mapping and Redacting Racial Covenants in Santa Clara County](https://reglab.github.io/racialcovenants)
**Overview of Model Details**
* Model name: `reglab-rrc/mistral-rrc`
* Version: 1.0
* Release date: October 17, 2024
* Model type: Finetuned causal language model (Mistral 7B)
* License: Open-source, licensed under the MIT License
* Language: English
* Domains: Legal documents (real property deeds)
* Task: Text classification and extraction (racial covenant detection)
## Usage
Here is an example of how to use the model to find racial covenants in a page of a deed:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import re
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("reglab/mistral-rrc")
model = AutoModelForCausalLM.from_pretrained("reglab/mistral-rrc")
def format_prompt(document):
return f"""### Instruction:
Determine whether the property deed contains a racial covenant. A racial covenant is a clause in a document that \
restricts who can reside, own, or occupy a property on the basis of race, ethnicity, national origin, or religion. \
Answer "Yes" or "No". If "Yes", provide the exact text of the relevant passage and then a quotation of the passage \
with spelling and formatting errors fixed.
### Input:
{document}
### Response:"""
def parse_output(output):
answer_match = re.search(r"\[ANSWER\](.*?)\[/ANSWER\]", output, re.DOTALL)
raw_passage_match = re.search(r"\[RAW PASSAGE\](.*?)\[/RAW PASSAGE\]", output, re.DOTALL)
quotation_match = re.search(r"\[CORRECTED QUOTATION\](.*?)\[/CORRECTED QUOTATION\]", output, re.DOTALL)
answer = answer_match.group(1).strip() if answer_match else None
raw_passage = raw_passage_match.group(1).strip() if raw_passage_match else None
quotation = quotation_match.group(1).strip() if quotation_match else None
return {
"answer": answer == "Yes",
"raw_passage": raw_passage,
"quotation": quotation
}
# Example usage
document = "[[Your property deed text here...]]"
prompt = format_prompt(document)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
result = tokenizer.decode(outputs[0])
parsed_result = parse_output(result)
print(parsed_result)
```
## Input and Output Formats
The model was trained with the input and output formats above, so please make sure to use these formats
when running inference.
- **Input Format:** The model accepts property deed documents in text format. It expects properly formatted prompts based on the instructional format outlined in the usage example, including the instruction to detect racial covenants and provide corrected text if found.
- **Output Format:** The output includes a response that provides:
- An answer to whether a racial covenant is present ("Yes" or "No").
- The raw text of the racial covenant if detected.
- A corrected quotation of the racial covenant text with spelling and formatting errors fixed.
## Intended Use
The finetuned Mistral model (`reglab-rrc/mistral-rrc`) is designed to detect and extract racial covenants from property deed documents. Racial covenants are clauses that historically restricted property ownership or residence based on race, ethnicity, national origin, or religion. This model aims to aid jurisdictions, such as Santa Clara County (CA), in identifying these covenants for removal or redaction, as mandated by laws like California's AB 1466. The intended use is to prioritize documents for review, reducing the time and resources required for human auditors to locate racially restrictive covenants (RRCs) manually, particularly in large datasets of property deeds. Legal professionals and government entities can integrate the model into workflows to streamline and scale up the process of identifying racially discriminatory language in real estate records.
---
## Training Data
The Mistral 7B model was finetuned on a collection of property deed documents gathered from eight counties across the United States, including Santa Clara County (CA). To account for potential variations in document formatting, OCR quality, and phrasing, data augmentation included property deeds from other jurisdictions, such as Bexar County (TX), Cuyahoga County (OH), and Hidalgo County (TX). In total, the training dataset comprised 3,801 annotated deed pages, with 2,987 (78.6%) containing racially restrictive covenants. The dataset was balanced with both positive and negative examples, derived from keyword-based searches and manual annotation efforts. The data was annotated through a multi-stage process, which included manual verification of model predictions and the development of a web-based annotation tool for more efficient data labeling. (For additional details about data augmentation and training, please refer to our paper.)
---
## Performance
The finetuned model was evaluated on a held-out test set of 739 pages from the original dataset, with approximately 70% of these pages containing racial covenants. Performance metrics for the model include page-level precision, recall, and F1 score, as well as span-level BLEU scores, to measure how accurately the model reproduced the exact span of the detected covenant text. The results are as follows:
- **Precision:** 1.000 (95% CI: 0.995-1.000)
- **Recall:** 0.994 (95% CI: 0.984-0.997)
- **F1 score:** 0.997
- **BLEU score:** 0.932 (for span-level accuracy of detected covenants)
The finetuned Mistral model outperformed other approaches, including keyword and fuzzy matching as well as zero-shot and few-shot GPT models, particularly in recall and precision.
---
### Limitations
Despite the performance of the finetuned Mistral model in detecting racial covenants, several limitations remain that must be considered and stated:
1. **Generalizability Across Jurisdictions:** This model was primarily finetuned on property deeds from eight counties, including Bexar County (TX), Cuyahoga County (OH), and Santa Clara County (CA). While we took care to include a variety of document types and OCR qualities, property deed language and formatting can vary significantly by jurisdiction. As a result, the model's performance may degrade when applied to regions with distinct linguistic, legal, or historical document structures. Future efforts should include jurisdiction-specific validation to ensure accurate detection in areas with unique property deed formats.
2. **Sensitivity to OCR Artifacts:** Although the model is robust to many types of OCR (optical character recognition) errors, heavily degraded documents or those with extremely poor scan quality may still pose challenges. Scanning artifacts can introduce noise that obscures key terms, leading to either missed racial covenants (false negatives) or incorrect detections (false positives). This remains a potential source of error, particularly in counties with older, handwritten, or poorly preserved records.
3. **Contextual Ambiguity:** The model relies on semantic analysis to identify racial covenants, and while this enhances its ability to detect atypical language, some ambiguity remains. For instance, terms like "white" could refer to a racial category or a person's name, and the model's ability to disambiguate such terms is not perfect, especially in cases where poor scanning quality makes it difficult to distinguish the usage of the ambiguous term based on the semantic content of the deed. In such cases, legal professionals must still verify the results, ensuring no improper redactions or omissions occur.
4. **Historical Document Complexity:** The language used in older property deeds can be complex and archaic. Some racial covenants may be expressed in subtle or convoluted ways that could evade even the most advanced language models. While the model has shown strong performance in capturing most covenants, human oversight remains crucial, particularly for documents with unusual or legally obscure phrasing.
5. **Dependency on Human Review:** Although the model significantly reduces the manual workload, legal review is still required for final verification. This human-in-the-loop approach mitigates the risk of false positives, but it does not entirely eliminate the need for expert intervention, particularly in the redaction and historical preservation processes.
---
### Ethical Considerations
The deployment of a language model for detecting racial covenants raises several important ethical considerations. We have done our best to carefully address these concerns throughout the project:
1. **Preservation of Historical Memory:** A key ethical consideration in this project is balancing the removal of offensive language from property deeds with the need to preserve historical records. While the model identifies and assists in redacting racially restrictive covenants, these covenants are also preserved in a historical registry by the County. This ensures that the history of housing discrimination is not erased but documented and made accessible for future research and public awareness. The creation of this historical record serves as an educational tool to understand the deep and troubling legacy of racial exclusion in housing markets.
2. **Accountability and Oversight:** The system has been designed with a clear chain of accountability, as required by California's AB 1466. All flagged documents must undergo legal review, ensuring that no inappropriate redactions occur and that the process is transparent and accurate. This human oversight safeguards against over-reliance on automated systems, which, while highly effective, are not infallible. Our current AI-driven pipeline prioritizes documents for review, but final decisions rest with human experts (specifically, legal professionals), mitigating the risk of both false positives and false negatives.
3. **Bias and Fairness:** The model is trained on historical documents that reflect the racial and social biases of the time. While the model itself is neutral in its detection of racially restrictive language, the training data may inherently carry these biases, as they originate from a time when discriminatory covenants were legally permissible. Ongoing efforts are required to ensure that the model does not perpetuate unintended biases, especially in jurisdictions with different historical contexts. Regular validation across diverse datasets and jurisdictions is essential to prevent any unfair outcomes.
4. **Accessibility and Open Model:** By choosing to finetune an open-source model (Mistral 7B), this project has prioritized transparency and accessibility. This decision makes the technology available to smaller counties and community-based organizations, many of which lack the resources to develop or license proprietary solutions. The release of the model empowers a broader range of actors to engage in legal reform efforts, fostering greater equity in the identification and removal of racial covenants. Additionally, privacy concerns have been addressed by masking private information in the training data, ensuring that the model does not learn or reproduce sensitive data.
5. **Advancing Public Good:** This project exemplifies how AI can be leveraged for the public good. By revealing patterns of housing discrimination and aiding in legal reform, the model contributes to ongoing efforts to address historical injustices. Beyond merely automating a legal task, this project enhances our understanding of systemic racism in the housing market, adding valuable insights to the academic and public discourse. It is a powerful illustration of how technology can assist in the pursuit of justice, equity, and historical accountability.
## Citation
If your work makes use of our model, data, or results, we request that you cite our paper as follows:
```bibtex
@article{suranisuzgun2024,
title={AI for Scaling Legal Reform: Mapping and Redacting Racial Covenants in Santa Clara County},
author={Surani, Faiz and Suzgun, Mirac and Raman, Vyoma and Manning, Christopher D. and Henderson, Peter and Ho, Daniel E.},
url={https://dho.stanford.edu/wp-content/uploads/Covenants.pdf},
year={2024}
}
``` |
alidenewade/unit_5_exercise | alidenewade | 2024-11-10T18:54:34Z | 88 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-10T18:11:41Z | ---
library_name: transformers
language:
- dv
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Unit 5 Ali's exercise
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13 (Alid)
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 116.39426922140697
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Unit 5 Ali's exercise
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 13 (Alid) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9533
- Wer Ortho: 223.8248
- Wer: 116.3943
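A minimal transcription sketch with the `transformers` pipeline (`sample.wav` is a placeholder for your own Dhivehi audio file):

```python
from transformers import pipeline

# Transcribe Dhivehi speech with the fine-tuned checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="alidenewade/unit_5_exercise",
)
print(asr("sample.wav")["text"])  # sample.wav is a placeholder
```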
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 550
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:--------:|
| 0.9416 | 1.6287 | 500 | 0.9533 | 223.8248 | 116.3943 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
phogen/FineLlama-3.1-8B | phogen | 2024-11-10T18:53:04Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T18:49:20Z | ---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** phogen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ahsannawazch/phi-3.5-disaster-tweets | ahsannawazch | 2024-11-10T18:47:28Z | 5 | 0 | null | [
"safetensors",
"phi3",
"trl",
"sft",
"custom_code",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-10T18:42:26Z | ---
license: apache-2.0
tags:
- trl
- sft
---
|
mradermacher/internlm-20b-llama-i1-GGUF | mradermacher | 2024-11-10T18:33:12Z | 84 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:KnutJaegersberg/internlm-20b-llama",
"base_model:quantized:KnutJaegersberg/internlm-20b-llama",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-10T15:22:15Z | ---
base_model: KnutJaegersberg/internlm-20b-llama
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: internlm
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/KnutJaegersberg/internlm-20b-llama
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/internlm-20b-llama-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ1_S.gguf) | i1-IQ1_S | 4.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ1_M.gguf) | i1-IQ1_M | 5.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ2_S.gguf) | i1-IQ2_S | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ2_M.gguf) | i1-IQ2_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q2_K.gguf) | i1-Q2_K | 7.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ3_S.gguf) | i1-IQ3_S | 8.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ3_M.gguf) | i1-IQ3_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q4_0.gguf) | i1-Q4_0 | 11.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/internlm-20b-llama-i1-GGUF/resolve/main/internlm-20b-llama.i1-Q6_K.gguf) | i1-Q6_K | 16.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
SufficientPrune3897/magnum-v4-123b-exl2-RPCAL-2.6bpw | SufficientPrune3897 | 2024-11-10T18:27:53Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-11-10T16:22:34Z | ---
license: other
license_name: mrl
language:
- en
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
---

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of [mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407).
## Prompting
A typical input would look like this:
```py
<s>[INST] SYSTEM MESSAGE\nUSER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST]
```
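Rather than assembling the string by hand, the tokenizer's chat template should render this format (a hedged sketch; assumes the tokenizer files in this repo carry the Mistral chat template):

```python
from transformers import AutoTokenizer

# Render the [INST] format shown above via the tokenizer's chat template
tokenizer = AutoTokenizer.from_pretrained(
    "SufficientPrune3897/magnum-v4-123b-exl2-RPCAL-2.6bpw"
)
messages = [{"role": "user", "content": "USER MESSAGE"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # e.g. <s>[INST] USER MESSAGE[/INST]
```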
## SillyTavern templates
Below are Instruct and Context templates for use within SillyTavern.
<details><summary>context template</summary>
```yaml
default SillyTavern template works fine
```
</details><br>
<details><summary>instruct template</summary>
```yaml
default SillyTavern template works fine
```
</details><br>
## Axolotl config
<details><summary>See axolotl config</summary>
```yaml
base_model: mistralai/Mistral-Large-Instruct-2407
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-org/c2_logs_16k_mistral-large_v1.2
type: sharegpt
conversation: mistral
- path: anthracite-org/kalo-opus-instruct-22k-no-refusal
type: sharegpt
conversation: mistral
- path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
type: sharegpt
conversation: mistral
- path: anthracite-org/nopm_claude_writing_fixed
type: sharegpt
conversation: mistral
- path: anthracite-org/kalo_opus_misc_240827
type: sharegpt
conversation: mistral
- path: anthracite-org/kalo_misc_part2
type: sharegpt
conversation: mistral
#chat_template: chatml
shuffle_merged_datasets: true
#default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: ./data/magnum-123b-data
val_set_size: 0.0
output_dir: ./data/123b-fft-out
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: 123b-magnum-fft
wandb_entity:
wandb_watch:
wandb_name: alter-attempt-04
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0000015
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
## Credits
We'd like to thank [Eric Hartford](https://huggingface.co/ehartford) for sponsoring the compute for this train.
We would also like to thank all members of Anthracite who made this finetune possible.
## Datasets
- [anthracite-org/c2_logs_16k_mistral-large_v1.2](https://huggingface.co/datasets/anthracite-org/c2_logs_16k_mistral-large_v1.2)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827)
- [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2)
## Training
We used 8x mi300x GPUs graciously provided by [Eric Hartford](https://huggingface.co/ehartford) for the full-parameter fine-tuning of the model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
... |
pxyyy/rlhflow_mixture_clean_empty_round_with_dart_scalebiosampled-600k | pxyyy | 2024-11-10T18:24:17Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T18:16:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
```bash
#!/bin/bash
#SBATCH --job-name="fintune"
#SBATCH --partition=ghx4
#SBATCH --nodes=1
#SBATCH --gpus-per-node=4
#SBATCH --tasks=1
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=20
#SBATCH --mem=512g
#SBATCH --time=23:59:00
#SBATCH --output="run.log"
#SBATCH --error="run.err"
set -e
export WANDB_API_KEY='1b2611814911cad498235f1ccb1a2e182638bd62'
# set up exp1 or exp3!!!!!
# launch this script after bilevel weighting and preparing data
# this script is for exp1 and exp3
# 1. finetune on bilevel and baseline
CUDA_VISIBLE=0,1,2,3
hf_ds=pxyyy/rlhflow_mixture_clean_empty_round_with_dart-math_scalebiosampled-600k
hf_val_ds=pxyyy/rlhflow_scalbio_test
model_and_tok=meta-llama/Meta-Llama-3-8B
conv_template=llama3
hf_ds_str=$(echo ${hf_ds}|sed 's/\//-/g')
tmp_data_dir=./tmp_data/${hf_ds_str}/
val_data_dir=./tmp_data/${hf_ds_str}_val/
mkdir -p ${tmp_data_dir}
mkdir -p ${val_data_dir}
python3 hf2lmflow.py --ds_name ${hf_ds} --save ${tmp_data_dir}/data.json
python3 hf2lmflow.py --ds_name ${hf_val_ds} --save ${val_data_dir}/data.json
model_str=$(echo ${model_and_tok}|sed 's/\//-/g')
lisa_activated_layers=2
lisa_interval_steps=20
gradient_accumulation_steps=2
per_device_train_batch_size=8
epoch=1
project_dir=/u/xpan2/projects/scalebio/finetune/
for lr in 2e-5
do
# Finetune
exp_id=scalebio-scalebio-${model_str}-${hf_ds_str}-${epoch}-$lr-lisa_${lisa_activated_layers}_${lisa_interval_steps}
# project_dir=$(cd "$(dirname $0)"; pwd)
log_dir=${project_dir}/log/${exp_id}
output_dir=${project_dir}/output_models/${exp_id}
echo $exp_id
mkdir -p ${output_dir} ${log_dir}
export TRANSFORMERS_VERBOSITY=info
deepspeed --master_port=7964 --include=localhost:${CUDA_VISIBLE} finetune.py \
--model_name_or_path ${model_and_tok} \
--trust_remote_code 1 \
--dataset_path ${tmp_data_dir}/ \
--eval_dataset_path ${val_data_dir}/ \
--output_dir ${output_dir} --overwrite_output_dir \
--conversation_template ${conv_template} \
--num_train_epochs $epoch \
--learning_rate $lr \
--disable_group_texts 1 \
--block_size 1024 \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size 1 \
--bf16 \
--deepspeed configs/ds_config_zero2_no_offload.json \
--torch_dtype bfloat16 \
--run_name ${exp_id} \
--optim adamw_torch_fused \
--logging_steps 1 \
--do_train \
--do_eval \
--ddp_timeout 72000 \
--save_total_limit 1 \
--load_best_model_at_end False \
--eval_steps 10 \
--save_only_model \
--evaluation_strategy "steps" \
--dataloader_num_workers 1 \
--lr_scheduler_type cosine \
--warmup_ratio 0.03 \
--gradient_checkpointing True \
--use_flash_attention 1 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--lisa_activated_layers ${lisa_activated_layers} \
--lisa_interval_steps ${lisa_interval_steps} \
| tee ${log_dir}/train.log \
2> ${log_dir}/train.err
done
```
`no lisa`
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
borisf/best-ludka1-bob | borisf | 2024-11-10T18:22:40Z | 29 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-10T17:25:08Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ludka1
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# best-ludka1-bob
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `ludka1` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
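For diffusers users, the following is a minimal loading sketch (this assumes a recent diffusers release with Flux support; the repo id and trigger word come from this card):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model (non-commercial license applies), then attach this LoRA
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("borisf/best-ludka1-bob")  # picks up the safetensors LoRA from this repo
pipe.to("cuda")

# Include the trigger word `ludka1` so the trained concept activates
image = pipe("a portrait photo of ludka1", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("ludka1.png")
```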
|
cuongdev/3nguoi-2000 | cuongdev | 2024-11-10T18:21:05Z | 29 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-11-10T18:15:34Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### 3nguoi-2000 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
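Outside the Colab, a minimal diffusers sketch may also work (the repo is tagged as a full StableDiffusionPipeline; the instance token below is a guess based on the model name, not documented by the author):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth-trained pipeline directly from the Hub
pipe = StableDiffusionPipeline.from_pretrained("cuongdev/3nguoi-2000", torch_dtype=torch.float16)
pipe.to("cuda")

# The exact instance token is not documented here; "3nguoi-2000" is a placeholder guess
image = pipe("a photo of 3nguoi-2000 person").images[0]
image.save("sample.png")
```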
Sample pictures of this concept:
|
michizavrel14/my_small_gpt2_hasek_dataset | michizavrel14 | 2024-11-10T18:16:17Z | 138 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T15:32:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
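In the absence of an official snippet, a minimal generation sketch (the prompt and sampling settings are illustrative, not from the authors):
```python
from transformers import pipeline

# Text generation with this fine-tuned GPT-2 checkpoint
generator = pipeline("text-generation", model="michizavrel14/my_small_gpt2_hasek_dataset")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```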
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
morturr/Llama-2-7b-hf-LOO_amazon-2024-11-10 | morturr | 2024-11-10T18:08:05Z | 7 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-11-10T16:02:32Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-2024-11-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-2024-11-10
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 3
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
bryanchrist/MATHWELL | bryanchrist | 2024-11-10T17:57:01Z | 0 | 2 | peft | [
"peft",
"arxiv:2402.15861",
"license:gpl-3.0",
"region:us"
] | null | 2024-02-21T23:33:15Z | ---
library_name: peft
license: gpl-3.0
---
## MATHWELL
MATHWELL is the model released in the paper [MATHWELL: Generating Educational Math Word Problems Using Teacher Annotations](https://arxiv.org/abs/2402.15861).
MATHWELL is a finetuned Llama-2 (70B) model that generates customized educational grade school math word problems and Python function solutions to these problems. Generated problems are 1) solvable, 2) accurate, and 3) appropriate. These criteria are essential to successfully supplement grade-school studentsβ math education. On average, 74% of MATHWELL's problems with executable solutions are solvable, accurate, and appropriate.
For more details on how MATHWELL was trained and evaluated, please see our [paper](https://arxiv.org/abs/2402.15861). Our [repo](https://github.com/bryanchrist/MATHWELL) contains a sample script for loading and interacting with MATHWELL.
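As a rough illustration, loading the adapter with PEFT might look like the following (a minimal sketch that assumes the Llama-2 70B base checkpoint and the 8-bit settings listed below; see the linked repo for the authors' actual script and prompt format):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-70b-hf"  # assumed base model; confirm against the MATHWELL repo

# 8-bit loading mirrors the bitsandbytes config used during training
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the MATHWELL LoRA adapter
model = PeftModel.from_pretrained(base, "bryanchrist/MATHWELL")

# Hypothetical prompt; the real prompt format is defined in the authors' repo
prompt = "Write a grade school math word problem about apples and a Python function that solves it."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```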
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
## Citation
```bibtex
@inproceedings{christ-etal-2024-mathwell,
title = "{MATHWELL}: Generating Educational Math Word Problems Using Teacher Annotations",
author = "Christ, Bryan R and
Kropko, Jonathan and
Hartvigsen, Thomas",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.696",
pages = "11914--11938",
abstract = "Math word problems are critical K-8 educational tools, but writing them is time consuming and requires extensive expertise. To be educational, problems must be solvable, have accurate answers, and, most importantly, be educationally appropriate. We propose that language models have potential to support K-8 math education by automatically generating word problems. However, evaluating educational appropriateness is hard to quantify. We fill this gap by having teachers evaluate problems generated by LLMs, who find existing models and data often fail to be educationally appropriate. We then explore automatically generating *educational* word problems, ultimately using our expert annotations to finetune a 70B language model. Our model, MATHWELL, is the first K-8 word problem generator targeted at educational appropriateness. Further expert studies find MATHWELL generates problems far more solvable, accurate, and appropriate than public models. MATHWELL also matches GPT-4{'}s problem quality while attaining more appropriate reading levels for K-8 students and avoiding generating harmful questions.",
}
``` |
furrutiav/roberta_mixtral_nllfg_rubric_sst2 | furrutiav | 2024-11-10T17:51:29Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-06T18:45:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LBK95/Llama-2-7b-hf-DPO-LookAhead-0_TTree1.4_TT0.9_TP0.7_TE0.2_V5 | LBK95 | 2024-11-10T17:45:20Z | 13 | 0 | peft | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-11-10T11:55:38Z | ---
base_model: meta-llama/Llama-2-7b-hf
library_name: peft
license: llama2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-DPO-LookAhead-0_TTree1.4_TT0.9_TP0.7_TE0.2_V5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-DPO-LookAhead-0_TTree1.4_TT0.9_TP0.7_TE0.2_V5
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0897
- Rewards/chosen: -2.9914
- Rewards/rejected: -2.7155
- Rewards/accuracies: 0.4000
- Rewards/margins: -0.2759
- Logps/rejected: -168.0010
- Logps/chosen: -174.0661
- Logits/rejected: -0.5254
- Logits/chosen: -0.5339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7826 | 0.2993 | 66 | 0.6590 | 0.0849 | 0.0090 | 0.8000 | 0.0759 | -140.7556 | -143.3033 | 0.0847 | 0.0794 |
| 0.639 | 0.5986 | 132 | 0.6196 | 0.1097 | -0.0511 | 0.9000 | 0.1607 | -141.3567 | -143.0557 | 0.0753 | 0.0696 |
| 0.5359 | 0.8980 | 198 | 0.6393 | 0.0423 | -0.0866 | 0.8000 | 0.1290 | -141.7119 | -143.7288 | 0.0629 | 0.0567 |
| 0.2727 | 1.1973 | 264 | 0.8080 | -1.1508 | -1.3039 | 0.6000 | 0.1532 | -153.8851 | -155.6598 | -0.0274 | -0.0343 |
| 0.3407 | 1.4966 | 330 | 0.6648 | -0.9615 | -1.1845 | 0.7000 | 0.2230 | -152.6907 | -153.7668 | -0.0764 | -0.0838 |
| 0.3991 | 1.7959 | 396 | 0.7534 | -1.2141 | -1.2811 | 0.6000 | 0.0670 | -153.6568 | -156.2932 | -0.1934 | -0.2005 |
| 0.1309 | 2.0952 | 462 | 0.8973 | -1.9586 | -1.8725 | 0.4000 | -0.0861 | -159.5707 | -163.7383 | -0.3197 | -0.3272 |
| 0.0603 | 2.3946 | 528 | 1.0892 | -2.8596 | -2.5458 | 0.3000 | -0.3138 | -166.3034 | -172.7478 | -0.4837 | -0.4920 |
| 0.1481 | 2.6939 | 594 | 1.1046 | -3.0656 | -2.7656 | 0.4000 | -0.2999 | -168.5022 | -174.8080 | -0.5326 | -0.5412 |
| 0.2564 | 2.9932 | 660 | 1.0897 | -2.9914 | -2.7155 | 0.4000 | -0.2759 | -168.0010 | -174.0661 | -0.5254 | -0.5339 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1 |
mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF | mradermacher | 2024-11-10T17:41:13Z | 56 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:BlouseJury/Mistral-7B-Discord-0.1-DPO",
"base_model:quantized:BlouseJury/Mistral-7B-Discord-0.1-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-06T16:16:44Z | ---
base_model: BlouseJury/Mistral-7B-Discord-0.1-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/BlouseJury/Mistral-7B-Discord-0.1-DPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
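As a concrete starting point, single-file quants from the table below can be loaded with `llama-cpp-python` (a minimal sketch; the filename is one of the quants listed here, and multi-part files are typically concatenated in order into one `.gguf` before loading, as the linked READMEs describe):
```python
from llama_cpp import Llama

# Load a quant downloaded from this repo (Q4_K_M shown; any single-file quant works the same way)
llm = Llama(model_path="Mistral-7B-Discord-0.1-DPO.Q4_K_M.gguf", n_ctx=4096)

out = llm("Q: What does a GGUF quantization level trade off?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```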
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF | mradermacher | 2024-11-10T17:41:13Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:BlouseJury/Mistral-7B-Discord-0.1-DPO",
"base_model:quantized:BlouseJury/Mistral-7B-Discord-0.1-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-10T15:00:39Z | ---
base_model: BlouseJury/Mistral-7B-Discord-0.1-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/BlouseJury/Mistral-7B-Discord-0.1-DPO
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Discord-0.1-DPO-i1-GGUF/resolve/main/Mistral-7B-Discord-0.1-DPO.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lafarizo/code_defect_detection_v1 | lafarizo | 2024-11-10T17:24:12Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"code-defect-detection",
"c",
"dataset:semeru/code-code-DefectDetection",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-10T15:16:50Z | ---
title: "Code Defect Detection v1"
tags:
- code-defect-detection
- c
library_name: "transformers"
datasets:
- semeru/code-code-DefectDetection
---
# Code Defect Detection v1
Code defect detection for the C language
### Model Sources
- **Repository:** [mrm8488/codebert2codebert-finetuned-code-defect-detection](https://huggingface.co/mrm8488/codebert2codebert-finetuned-code-defect-detection)
### Dataset
- **Repository:** [semeru/code-code-DefectDetection](https://huggingface.co/datasets/semeru/code-code-DefectDetection)
| Results | Value |
|---------------------------|--------------|
| **Evaluation Loss** | 0.7605 |
| **Accuracy** | 66.76% |
| **Precision** | 65.64% |
| **Recall** | 58.01% |
| **F1 Score** | 61.59% |
| **AUC** | 73.52% |
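A minimal classification sketch (the repo id comes from this card; label names and thresholds are whatever the fine-tuned config defines):
```python
from transformers import pipeline

# Binary defect detection over C source snippets
clf = pipeline("text-classification", model="lafarizo/code_defect_detection_v1")

code = "int div(int a, int b) { return a / b; }  /* no check for b == 0 */"
print(clf(code))  # e.g. [{'label': ..., 'score': ...}]
```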
|
jithuj12344321/whisper-small-en | jithuj12344321 | 2024-11-10T17:22:24Z | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:kaggle/medical-speech-transcription-and-intent",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-09T19:43:25Z | ---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- kaggle/medical-speech-transcription-and-intent
metrics:
- wer
model-index:
- name: Whisper Small En - Jithu J
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: medical-speech-transcription-and-intent
type: kaggle/medical-speech-transcription-and-intent
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 3.9012226512226515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small En - Jithu J
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the medical-speech-transcription-and-intent dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0931
- Wer: 3.9012
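For quick transcription, a minimal pipeline sketch (the audio path is illustrative):
```python
from transformers import pipeline

# Speech recognition with the fine-tuned Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="jithuj12344321/whisper-small-en")
print(asr("patient_recording.wav")["text"])
```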
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.0165 | 3.3898 | 1000 | 0.0971 | 4.7860 |
| 0.0012 | 6.7797 | 2000 | 0.0905 | 4.1425 |
| 0.0001 | 10.1695 | 3000 | 0.0930 | 4.0138 |
| 0.0001 | 13.5593 | 4000 | 0.0931 | 3.9012 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
lafarizo/code_translation_v2 | lafarizo | 2024-11-10T17:15:34Z | 141 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"code-translation",
"code-to-code",
"java",
"csharp",
"dataset:google/code_x_glue_cc_code_to_code_trans",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-08T08:39:09Z | ---
title: "Code Translation v2"
tags:
- code-translation
- code-to-code
- java
- csharp
library_name: "transformers"
datasets:
- google/code_x_glue_cc_code_to_code_trans
widget:
- text: "public class HelloWorld { public static void main(String[] args) { System.out.println(\"Hello, World!\"); } }"
---
# Code Translation v2
Code Translation from Java to C#
### Model Sources
- **Repository:** [bigcode/tiny_starcoder_py](https://huggingface.co/bigcode/tiny_starcoder_py)
### Dataset
- **Repository:** [google/code_x_glue_cc_code_to_code_trans](https://huggingface.co/datasets/google/code_x_glue_cc_code_to_code_trans)
### Testing Data
- [Testing Data](https://huggingface.co/datasets/google/code_x_glue_cc_code_to_code_trans/viewer/default/test)
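A minimal generation sketch (the repo id comes from this card; the prompt format the model expects is not documented here, so the plain Java-in framing below is an assumption):
```python
from transformers import pipeline

# Java -> C# translation treated as plain text generation
translator = pipeline("text-generation", model="lafarizo/code_translation_v2")

java_code = 'public class HelloWorld { public static void main(String[] args) { System.out.println("Hello, World!"); } }'
print(translator(java_code, max_new_tokens=128)[0]["generated_text"])
```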
|
AJMALm/Gemma-2-9b-it-chat-doctor | AJMALm | 2024-11-10T17:10:31Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-09T17:48:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
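Pending the authors' own snippet, a minimal chat sketch for this Gemma-2 checkpoint (the prompt is illustrative, and outputs must not be treated as medical advice):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AJMALm/Gemma-2-9b-it-chat-doctor"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What are common causes of persistent headaches?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```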
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FINGU-AI/Qwen2.5-32B-Lora-HQ-e-4 | FINGU-AI | 2024-11-10T16:50:57Z | 5 | 0 | peft | [
"peft",
"safetensors",
"en",
"ko",
"zh",
"pt",
"ja",
"uz",
"tl",
"th",
"vi",
"id",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-32B-Instruct",
"license:mit",
"region:us"
] | null | 2024-11-10T16:49:43Z | ---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: peft
license: mit
language:
- en
- ko
- zh
- pt
- ja
- uz
- tl
- th
- vi
- id
---
# FINGU-AI/Qwen2.5-32B-Lora-HQ-e-4
## Overview
`FINGU-AI/Qwen2.5-32B-Lora-HQ-e-4` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. This model is particularly useful for translating between Korean and Uzbek, as well as supporting other custom NLP tasks through flexible input.
## Model Details
- **Model ID**: `FINGU-AI/Qwen2.5-32B-Lora-HQ-e-4`
- **Architecture**: Causal Language Model (LM)
- **Parameters**: 32 billion
- **Precision**: Torch BF16 for efficient GPU memory usage
- **Attention**: SDPA (Scaled Dot-Product Attention)
- **Primary Use Case**: Translation (e.g., Korean to Uzbek), text generation, and dialogue systems.
## Example Usage
### Installation
Make sure to install the required packages:
```bash
pip install torch transformers
```
### Loading the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Model and Tokenizer
model_id = 'FINGU-AI/Qwen2.5-32B-Lora-HQ-e-4'
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.to('cuda')
# Input Messages for Translation
messages = [
{"role": "system", "content": "translate korean to Uzbek"},
{"role": "user", "content": """μλ‘μ΄ μν κ³μ’λ₯Ό κ°μ€νλ μ μ°¨λ λ€μκ³Ό κ°μ΅λλ€:
1. κ³μ’ κ°μ€ λͺ©μ κ³Ό μ λΆ νμΈμ μν μλ₯ μ μΆ
2. μλ₯ κ²ν κ³Όμ μ κ±°μΉλ κ²
3. κ³ κ°λμ μ μ νμΈ μ μ°¨λ₯Ό μ§ννλ κ²
4. λͺ¨λ μ μ°¨κ° μλ£λλ©΄ κ³μ’ κ°μ€μ΄ κ°λ₯ν©λλ€.
κ³μ’ κ°μ€μ μνμλ κ²½μ°, μ λΆμ¦κ³Ό ν¨κ» λ°©λ¬Έν΄ μ£Όμλ©΄ λ©λλ€.
"""},
]
# Tokenize and Generate Response
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to('cuda')
outputs = model.generate(
input_ids,
max_new_tokens=500,
do_sample=True,
)
# Decode and Print the Translation
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
``` |
gavinqiangli/my-awesome-cross-encoder | gavinqiangli | 2024-11-10T16:43:05Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-10T16:42:47Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
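Given the `cross-encoder` tag, one reasonable starting point is sentence-transformers (a minimal sketch; the query-passage pairs are illustrative):
```python
from sentence_transformers import CrossEncoder

# Score query-passage relevance with the cross-encoder
model = CrossEncoder("gavinqiangli/my-awesome-cross-encoder")
scores = model.predict([
    ("How many people live in Berlin?", "Berlin has a population of about 3.7 million."),
    ("How many people live in Berlin?", "The weather in Berlin is mild in spring."),
])
print(scores)
```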
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kikeavi36/Orpo_Qwen2.5-3B-Instruct-FT | kikeavi36 | 2024-11-10T16:37:36Z | 138 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T16:30:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SKNahin/functionary-medium-v3.1-fine-llamafactory | SKNahin | 2024-11-10T16:37:27Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"llama-factory",
"full",
"generated_from_trainer",
"custom_code",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-10T16:17:32Z | ---
library_name: transformers
base_model: functionary-small-v3.1
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: functionary-medium-v3.1-fine-llamafactory
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# functionary-medium-v3.1-fine-llamafactory
This model is a fine-tuned version of [functionary-small-v3.1](https://huggingface.co/functionary-small-v3.1) on the sample_1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
PriyHF/brand_product_recog | PriyHF | 2024-11-10T16:27:20Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-10T16:24:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
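As a minimal sketch (assuming the Hub id `PriyHF/brand_product_recog` and the `text-classification` pipeline tag listed above; the example input is illustrative only):

```python
from transformers import pipeline

# Minimal sketch: the checkpoint is tagged as a BERT text-classification
# model, so the generic pipeline API should apply.
classifier = pipeline("text-classification", model="PriyHF/brand_product_recog")
print(classifier("The new Galaxy S24 camera is impressive."))
```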
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yejinkim/forget10_expert_epoch7 | yejinkim | 2024-11-10T16:26:46Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T16:20:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
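As a minimal sketch (assuming the Hub id `yejinkim/forget10_expert_epoch7` and the `phi`/`text-generation` tags above; the prompt is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: standard causal-LM loading path for a Phi checkpoint.
model_id = "yejinkim/forget10_expert_epoch7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Question: Who wrote 'Pride and Prejudice'?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```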
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dmabby/bert2-finetuned-ner | dmabby | 2024-11-10T16:26:18Z | 63 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-10T15:51:54Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: dmabby/bert2-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dmabby/bert2-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3737
- Validation Loss: 0.3562
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 21, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4645 | 0.3562 | 0 |
| 0.3770 | 0.3562 | 1 |
| 0.3737 | 0.3562 | 2 |
### Framework versions
- Transformers 4.45.1
- TensorFlow 2.17.0
- Datasets 3.1.0
- Tokenizers 0.20.0
|
PriyHF/emotion_recog | PriyHF | 2024-11-10T16:21:49Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-10T16:20:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
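As a minimal sketch (assuming the Hub id `PriyHF/emotion_recog`; the label names come from the model's own config, which is not documented here):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch for the BERT text-classification checkpoint;
# id2label is read from the model config rather than hard-coded.
model_id = "PriyHF/emotion_recog"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't believe we finally won!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```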
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Criser2013/NER-finetuning-XML-RoBERTa-BIOBERT | Criser2013 | 2024-11-10T16:20:20Z | 25 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:biobert_json",
"base_model:raulgdp/xml-roberta-large-finetuned-ner",
"base_model:finetune:raulgdp/xml-roberta-large-finetuned-ner",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-09T15:08:20Z | ---
library_name: transformers
base_model: raulgdp/xml-roberta-large-finetuned-ner
tags:
- generated_from_trainer
datasets:
- biobert_json
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER-finetuning-XML-RoBERTa-BIOBERT
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: biobert_json
type: biobert_json
config: Biobert_json
split: validation
args: Biobert_json
metrics:
- name: Precision
type: precision
value: 0.9497881598534296
- name: Recall
type: recall
value: 0.9714235521461615
- name: F1
type: f1
value: 0.9604840343919173
- name: Accuracy
type: accuracy
value: 0.981362755330252
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-finetuning-XML-RoBERTa-BIOBERT
This model is a fine-tuned version of [raulgdp/xml-roberta-large-finetuned-ner](https://huggingface.co/raulgdp/xml-roberta-large-finetuned-ner) on the biobert_json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0946
- Precision: 0.9498
- Recall: 0.9714
- F1: 0.9605
- Accuracy: 0.9814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1306 | 1.0 | 1224 | 0.1013 | 0.9299 | 0.9609 | 0.9451 | 0.9735 |
| 0.0996 | 2.0 | 2448 | 0.0932 | 0.9383 | 0.9656 | 0.9517 | 0.9777 |
| 0.0608 | 3.0 | 3672 | 0.0865 | 0.9493 | 0.9720 | 0.9605 | 0.9813 |
| 0.0445 | 4.0 | 4896 | 0.0927 | 0.9531 | 0.9729 | 0.9629 | 0.9819 |
| 0.0327 | 5.0 | 6120 | 0.0946 | 0.9498 | 0.9714 | 0.9605 | 0.9814 |
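For inference, a minimal sketch (assuming the checkpoint is published under `Criser2013/NER-finetuning-XML-RoBERTa-BIOBERT`; the example sentence is illustrative only):

```python
from transformers import pipeline

# Minimal sketch: token-classification pipeline with entity grouping.
ner = pipeline(
    "token-classification",
    model="Criser2013/NER-finetuning-XML-RoBERTa-BIOBERT",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("The patient was prescribed 50 mg of atenolol for hypertension."))
```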
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
harshvardhanj733/results_english | harshvardhanj733 | 2024-11-10T16:19:49Z | 180 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-10T16:18:27Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: results_english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_english
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7851
- Accuracy: 0.7178
- Precision: 0.7201
- Recall: 0.7178
- F1: 0.7182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 264 | 0.8362 | 0.6667 | 0.6659 | 0.6667 | 0.6634 |
| 0.9341 | 2.0 | 528 | 0.7913 | 0.6856 | 0.6901 | 0.6856 | 0.6794 |
| 0.9341 | 3.0 | 792 | 0.7716 | 0.6951 | 0.6974 | 0.6951 | 0.6919 |
| 0.6719 | 4.0 | 1056 | 0.8301 | 0.7159 | 0.7185 | 0.7159 | 0.7163 |
| 0.6719 | 5.0 | 1320 | 0.7851 | 0.7178 | 0.7201 | 0.7178 | 0.7182 |
| 0.5313 | 6.0 | 1584 | 0.9683 | 0.6761 | 0.6809 | 0.6761 | 0.6698 |
| 0.5313 | 7.0 | 1848 | 1.1330 | 0.6913 | 0.6923 | 0.6913 | 0.6883 |
| 0.4155 | 8.0 | 2112 | 1.2025 | 0.7102 | 0.7094 | 0.7102 | 0.7084 |
| 0.4155 | 9.0 | 2376 | 1.5090 | 0.6686 | 0.6711 | 0.6686 | 0.6595 |
| 0.3457 | 10.0 | 2640 | 1.6342 | 0.6856 | 0.6871 | 0.6856 | 0.6847 |
| 0.3457 | 11.0 | 2904 | 1.7451 | 0.6875 | 0.6923 | 0.6875 | 0.6879 |
| 0.3272 | 12.0 | 3168 | 1.8827 | 0.7027 | 0.7017 | 0.7027 | 0.6991 |
| 0.3272 | 13.0 | 3432 | 1.9303 | 0.6875 | 0.6868 | 0.6875 | 0.6865 |
| 0.2553 | 14.0 | 3696 | 1.9490 | 0.6913 | 0.6897 | 0.6913 | 0.6895 |
| 0.2553 | 15.0 | 3960 | 1.9609 | 0.6913 | 0.6902 | 0.6913 | 0.6895 |
| 0.2349 | 16.0 | 4224 | 1.9921 | 0.6875 | 0.6850 | 0.6875 | 0.6848 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
SufficientPrune3897/magnum-v4-123b-exl2-2.65bpw | SufficientPrune3897 | 2024-11-10T16:18:46Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-11-10T14:04:02Z | ---
license: other
license_name: mrl
language:
- en
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
---
Quant of:

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of [mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407).
## Prompting
A typical input would look like this:
```text
<s>[INST] SYSTEM MESSAGE\nUSER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST]
```
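As an illustrative sketch (not an official helper; it simply mirrors the turn markers shown above, and the function name is hypothetical):

```python
def build_mistral_prompt(system: str, turns: list[tuple[str, str | None]]) -> str:
    """Assemble a Mistral-style prompt from (user, assistant) turns.

    Illustrative only: follows the template above, where the system
    message is prepended to the first user message and a None assistant
    marks the turn awaiting completion.
    """
    prompt = ""
    for i, (user, assistant) in enumerate(turns):
        user_msg = f"{system}\n{user}" if i == 0 and system else user
        prefix = "<s>" if i == 0 else ""
        prompt += f"{prefix}[INST] {user_msg}[/INST]"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

print(build_mistral_prompt("You are a storyteller.", [("Begin the tale.", None)]))
```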
## SillyTavern templates
Below are Instruct and Context templates for use within SillyTavern.
<details><summary>context template</summary>
```yaml
default SillyTavern template works fine
```
</details><br>
<details><summary>instruct template</summary>
```yaml
default SillyTavern template works fine
```
</details><br>
## Axolotl config
<details><summary>See axolotl config</summary>
```yaml
base_model: mistralai/Mistral-Large-Instruct-2407
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-org/c2_logs_16k_mistral-large_v1.2
type: sharegpt
conversation: mistral
- path: anthracite-org/kalo-opus-instruct-22k-no-refusal
type: sharegpt
conversation: mistral
- path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
type: sharegpt
conversation: mistral
- path: anthracite-org/nopm_claude_writing_fixed
type: sharegpt
conversation: mistral
- path: anthracite-org/kalo_opus_misc_240827
type: sharegpt
conversation: mistral
- path: anthracite-org/kalo_misc_part2
type: sharegpt
conversation: mistral
#chat_template: chatml
shuffle_merged_datasets: true
#default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: ./data/magnum-123b-data
val_set_size: 0.0
output_dir: ./data/123b-fft-out
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: 123b-magnum-fft
wandb_entity:
wandb_watch:
wandb_name: alter-attempt-04
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0000015
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
## Credits
We'd like to thank [Eric Hartford](https://huggingface.co/ehartford) for sponsoring the compute for this train.
We would also like to thank all members of Anthracite who made this finetune possible.
## Datasets
- [anthracite-org/c2_logs_16k_mistral-large_v1.2](https://huggingface.co/datasets/anthracite-org/c2_logs_16k_mistral-large_v1.2)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827)
- [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2)
## Training
We used 8x AMD MI300X GPUs graciously provided by [Eric Hartford](https://huggingface.co/ehartford) for the full-parameter fine-tuning of the model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
... |
amin1123/whisper-small-ps | amin1123 | 2024-11-10T16:15:28Z | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ps",
"dataset:pairsys/open_asr",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-10T04:58:42Z | ---
library_name: transformers
language:
- ps
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- pairsys/open_asr
metrics:
- wer
model-index:
- name: Whisper Small Pashto
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Open ASR
type: pairsys/open_asr
args: 'config: pashto'
metrics:
- name: Wer
type: wer
value: 34.475374732334046
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Pashto
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Open ASR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7846
- Wer: 34.4754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
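These settings map onto 🤗 `Seq2SeqTrainingArguments` roughly as follows (a hedged sketch: `output_dir` is a placeholder, not recovered from the actual run):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above; output_dir is an
# assumption, everything else mirrors the list.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ps",  # assumption
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=5000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```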
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0112 | 17.8571 | 1000 | 0.6265 | 38.1462 |
| 0.0023 | 35.7143 | 2000 | 0.7230 | 35.0260 |
| 0.0006 | 53.5714 | 3000 | 0.7555 | 34.7201 |
| 0.0001 | 71.4286 | 4000 | 0.7708 | 34.9342 |
| 0.0001 | 89.2857 | 5000 | 0.7846 | 34.4754 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
cuongdev/3nguoi-4000 | cuongdev | 2024-11-10T16:14:20Z | 31 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-11-10T16:10:52Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### 3nguoi-4000 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
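Alternatively, a minimal 🧨 diffusers sketch (assuming the checkpoint loads as a standard Stable Diffusion pipeline, per the tags above; the prompt is an assumption since the instance token is not documented):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical sketch: load the DreamBooth checkpoint and sample one image.
pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/3nguoi-4000", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of 3nguoi-4000 person, portrait").images[0]  # prompt is an assumption
image.save("sample.png")
```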
Sample pictures of this concept:
|
ihughes15234/phi35_tictactoe_dpo1epoch_v3 | ihughes15234 | 2024-11-10T16:10:45Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:ihughes15234/phi35_tictactoe_dpo6epoch_v2",
"base_model:finetune:ihughes15234/phi35_tictactoe_dpo6epoch_v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T16:07:21Z | ---
base_model: ihughes15234/phi35_tictactoe_dpo6epoch_v2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ihughes15234
- **License:** apache-2.0
- **Finetuned from model:** ihughes15234/phi35_tictactoe_dpo6epoch_v2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PopularPenguin/t5-small-awesome-text-to-sql-2024-11-10_13-40 | PopularPenguin | 2024-11-10T15:57:55Z | 45 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:arrow",
"base_model:cssupport/t5-small-awesome-text-to-sql",
"base_model:finetune:cssupport/t5-small-awesome-text-to-sql",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-10T13:42:52Z | ---
library_name: transformers
license: apache-2.0
base_model: cssupport/t5-small-awesome-text-to-sql
tags:
- generated_from_trainer
datasets:
- arrow
model-index:
- name: t5-small-awesome-text-to-sql-2024-11-10_13-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-awesome-text-to-sql-2024-11-10_13-40
This model is a fine-tuned version of [cssupport/t5-small-awesome-text-to-sql](https://huggingface.co/cssupport/t5-small-awesome-text-to-sql) on the arrow dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1505
- Gen Len: 19.0
- Bertscorer-p: 0.5983
- Bertscorer-r: 0.1002
- Bertscorer-f1: 0.3375
- Sacrebleu-score: 6.1735
- Sacrebleu-precisions: [92.82196987876635, 86.09309987961223, 81.16865589315682, 77.5936294965929]
- Bleu-bp: 0.0733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| 0.2655 | 1.0 | 4772 | 0.2099 | 19.0 | 0.5770 | 0.0864 | 0.3203 | 5.7173 | [91.0934769807022, 81.88030009989161, 75.59001146341751, 71.32247244849066] | 0.0718 |
| 0.1951 | 2.0 | 9544 | 0.1772 | 19.0 | 0.5695 | 0.0718 | 0.3090 | 5.7315 | [91.38097911302968, 82.52214039836731, 76.55664627495614, 73.06145893164847] | 0.0711 |
| 0.1609 | 3.0 | 14316 | 0.1628 | 19.0 | 0.5960 | 0.1033 | 0.3382 | 6.0737 | [92.32304047118862, 84.75338215740487, 79.32502315982035, 75.25860249102807] | 0.0735 |
| 0.1412 | 4.0 | 19088 | 0.1551 | 19.0 | 0.5925 | 0.0959 | 0.3326 | 6.0701 | [92.56176903043524, 85.09918369073299, 79.79597353297214, 76.12497023888257] | 0.0730 |
| 0.1191 | 5.0 | 23860 | 0.1512 | 19.0 | 0.5905 | 0.0928 | 0.3300 | 6.0937 | [92.29263048778147, 84.9906547977318, 79.83711978971085, 76.22241882452364] | 0.0733 |
| 0.1063 | 6.0 | 28632 | 0.1486 | 19.0 | 0.5959 | 0.0986 | 0.3356 | 6.1128 | [92.67271190348113, 85.5578689269597, 80.37916696032137, 76.71086200742904] | 0.0731 |
| 0.094 | 7.0 | 33404 | 0.1489 | 19.0 | 0.5984 | 0.1024 | 0.3388 | 6.1770 | [92.60841659561831, 85.6159908960634, 80.52775143703391, 76.7429609924408] | 0.0738 |
| 0.0875 | 8.0 | 38176 | 0.1496 | 19.0 | 0.5960 | 0.0976 | 0.3351 | 6.1421 | [92.6290822842547, 85.75971432797346, 80.81931219105543, 77.24221764177369] | 0.0732 |
| 0.0841 | 9.0 | 42948 | 0.1498 | 19.0 | 0.6019 | 0.1059 | 0.3424 | 6.2261 | [92.84100049795074, 86.14431816984929, 81.20480235905357, 77.4564647967041] | 0.0739 |
| 0.0777 | 10.0 | 47720 | 0.1505 | 19.0 | 0.5983 | 0.1002 | 0.3375 | 6.1735 | [92.82196987876635, 86.09309987961223, 81.16865589315682, 77.5936294965929] | 0.0733 |
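For inference, a minimal sketch (assuming the checkpoint is published under `PopularPenguin/t5-small-awesome-text-to-sql-2024-11-10_13-40`; the schema-plus-question input format is a common text-to-SQL convention and an assumption here):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical inference sketch for the fine-tuned T5 checkpoint.
model_id = "PopularPenguin/t5-small-awesome-text-to-sql-2024-11-10_13-40"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = (
    "tables: CREATE TABLE employees (id INT, name TEXT, salary INT) "
    "query for: names of employees earning over 50000"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```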
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
ICT3214-Group5/MD5_gpt_neo_v1.1.3 | ICT3214-Group5 | 2024-11-10T15:56:44Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T15:01:29Z | ---
library_name: transformers
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: MD5_gpt_neo_v1.1.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MD5_gpt_neo_v1.1.3
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0538
- Rouge1: 0.5076
- Rouge2: 0.2548
- Rougel: 0.4743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 70 | 0.0628 | 0.4870 | 0.2269 | 0.4475 |
| No log | 2.0 | 140 | 0.0566 | 0.4913 | 0.2367 | 0.4607 |
| No log | 3.0 | 210 | 0.0545 | 0.4972 | 0.2484 | 0.4667 |
| No log | 4.0 | 280 | 0.0544 | 0.5023 | 0.2586 | 0.4749 |
| No log | 5.0 | 350 | 0.0538 | 0.5076 | 0.2548 | 0.4743 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
yam3333/mBART_Finetune_NagarGPT | yam3333 | 2024-11-10T15:55:45Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-10T15:54:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
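As a minimal sketch (assuming the Hub id `yam3333/mBART_Finetune_NagarGPT` and the `text2text-generation` pipeline tag above; the input text is illustrative only):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch: generic seq2seq loading path for the mBART checkpoint.
model_id = "yam3333/mBART_Finetune_NagarGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```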
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/opus-v0-70b-i1-GGUF | mradermacher | 2024-11-10T15:47:41Z | 76 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:dreamgen/opus-v0-70b",
"base_model:quantized:dreamgen/opus-v0-70b",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-10T11:46:52Z | ---
base_model: dreamgen/opus-v0-70b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/dreamgen/opus-v0-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/opus-v0-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
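For the two-part Q6_K download listed below, a minimal concatenation sketch in Python (equivalent to `cat part1 part2 > whole`; file names are taken from the table):

```python
import shutil

# Sketch: stitch the split Q6_K download back into one GGUF file.
parts = [
    "opus-v0-70b.i1-Q6_K.gguf.part1of2",
    "opus-v0-70b.i1-Q6_K.gguf.part2of2",
]
with open("opus-v0-70b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```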
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/opus-v0-70b-i1-GGUF/resolve/main/opus-v0-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
KienT/sd-class-butterflies-32 | KienT | 2024-11-10T15:45:50Z | 47 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-11-10T15:45:31Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('KienT/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Keltezaa/neon-environments | Keltezaa | 2024-11-10T15:40:59Z | 73 | 4 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"3d",
"bar",
"background",
"arcade",
"living room",
"train",
"bathroom",
"bowling",
"diner",
"pub",
"hallway",
"backgrounds",
"neons",
"escalator",
"disco bar",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T13:00:26Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: >-
https://multimodal.art/civitai-licenses?allowNoCredit=False&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- 3d
- bar
- background
- arcade
- living room
- train
- bathroom
- bowling
- diner
- pub
- hallway
- backgrounds
- neons
- escalator
- disco bar
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
widget:
- text: ' '
output:
url: 31343274.jpeg
- text: ' '
output:
url: 31343350.jpeg
- text: ' '
output:
url: 31343277.jpeg
- text: ' '
output:
url: 31343276.jpeg
- text: ' '
output:
url: 31343637.jpeg
- text: ' '
output:
url: 31343886.jpeg
- text: ' '
output:
url: 31343254.jpeg
- text: ' '
output:
url: 31343255.jpeg
- text: ' '
output:
url: 31343256.jpeg
- text: A pornstar woman holding a Neon sign "SLDR Flux NSFW v2 Studio"
output:
url: images/example_sfeoq5az5.png
- text: a Neon sign that show an illustration of (2 fingers and a pussy)
output:
url: images/example_hrthffo2b.png
---
# Neon Environments
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p>Introducing Neon Environments Model: Illuminating Arcades and Pubs</p><p>The Neon Environments model is designed to generate visually striking images inspired by arcades, pubs, and other premises adorned with neon lights.</p>
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/neon-environments/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/neon-environments', weight_name='Neon_Environments.safetensors')
image = pipeline('Your custom prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
theprint/ReWiz-Nemo-12B-Instruct | theprint | 2024-11-10T15:32:29Z | 8 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"base_model:finetune:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T02:01:46Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
model-index:
- name: ReWiz-Nemo-12B-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 10.62
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Nemo-12B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 29.93
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Nemo-12B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 7.18
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Nemo-12B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.84
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Nemo-12B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.23
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Nemo-12B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.99
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Nemo-12B-Instruct
name: Open LLM Leaderboard
---
<img src="https://huggingface.co/theprint/ReWiz-Llama-3.2-3B/resolve/main/ReWiz_banner.png">
This is a ReWiz fine-tune of Mistral-Nemo-Instruct-2407. Half the training data was geared toward better reasoning (EvolKit-20k and reasoning-base-20k), while the other half helps de-censor the model (the WizardLM dataset).
# Looking for GGUF?
There is a separate upload for that! Download [theprint/ReWiz-Nemo-12B-Instruct-GGUF](https://huggingface.co/theprint/ReWiz-Nemo-12B-Instruct-GGUF) instead.
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
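For local inference with the standard transformers API, a minimal sketch (the prompt and generation settings below are illustrative assumptions, not part of this card):
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theprint/ReWiz-Nemo-12B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Mistral-Nemo instruct models ship a chat template; apply it to format the prompt.
messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```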
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_theprint__ReWiz-Nemo-12B-Instruct)
| Metric |Value|
|-------------------|----:|
|Avg. |15.63|
|IFEval (0-Shot) |10.62|
|BBH (3-Shot) |29.93|
|MATH Lvl 5 (4-Shot)| 7.18|
|GPQA (0-shot) | 9.84|
|MuSR (0-shot) |10.23|
|MMLU-PRO (5-shot) |25.99|
|
Seyfelislem/afrispeech_large_A100 | Seyfelislem | 2024-11-10T15:26:56Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:afrispeech-200",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-04-03T21:27:00Z | ---
tags:
- generated_from_trainer
datasets:
- afrispeech-200
metrics:
- wer
model-index:
- name: afrispeech_large_A100
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: afrispeech-200
type: afrispeech-200
config: all
split: train
args: all
metrics:
- name: Wer
type: wer
value: 14.81
---
# afrispeech_large_A100
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the afrispeech-200 dataset.
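A minimal usage sketch with the transformers ASR pipeline (`sample.wav` is a placeholder path to a local audio file):
```py
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as an ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="Seyfelislem/afrispeech_large_A100")

# Transcribe a local audio file ("sample.wav" is a placeholder).
print(asr("sample.wav")["text"])
```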
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
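As noted above, a hedged reconstruction of these settings with `Seq2SeqTrainingArguments` (the original training script is not published; `output_dir` is a placeholder and argument names follow the standard transformers API):
```py
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./afrispeech_large_A100",  # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 32 x 2 = effective batch size of 64
    warmup_steps=500,
    max_steps=2000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # mixed-precision training (native AMP)
)
```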
### Training results
Training curves are available on [TensorBoard](https://huggingface.co/Seyfelislem/afrispeech_large_A100/tensorboard).
### Framework versions
- Transformers 4.29.1
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|