| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF | featherless-ai-quants | 2024-11-10T19:50:22Z | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:grimjim/Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge",
"base_model:quantized:grimjim/Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T18:12:38Z | ---
base_model: grimjim/Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# grimjim/Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF/blob/main/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q8_0.gguf) | 8145.11 MB |
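
Any of these files can be fetched programmatically. Below is a minimal sketch using the `huggingface_hub` Python client; the repo id and filename are taken from the table above, and you would swap in whichever quantization fits your hardware:

```python
from huggingface_hub import hf_hub_download

# Download one quantization from this repo. The file is stored in the local
# Hugging Face cache, and the returned path points at the .gguf file on disk.
path = hf_hub_download(
    repo_id="featherless-ai-quants/grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-GGUF",
    filename="grimjim-Llama-3-Instruct-8B-SimPO-SPPO-Iter3-merge-Q4_K_M.gguf",
)
print(path)
```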
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF | featherless-ai-quants | 2024-11-10T19:50:18Z | 20 | 0 | null | [
"gguf",
"text-generation",
"base_model:TitleOS/EinsteinBagel-8B",
"base_model:quantized:TitleOS/EinsteinBagel-8B",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-07T17:51:10Z | ---
base_model: TitleOS/EinsteinBagel-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# TitleOS/EinsteinBagel-8B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [TitleOS-EinsteinBagel-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [TitleOS-EinsteinBagel-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [TitleOS-EinsteinBagel-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [TitleOS-EinsteinBagel-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [TitleOS-EinsteinBagel-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [TitleOS-EinsteinBagel-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [TitleOS-EinsteinBagel-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [TitleOS-EinsteinBagel-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [TitleOS-EinsteinBagel-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [TitleOS-EinsteinBagel-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [TitleOS-EinsteinBagel-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/TitleOS-EinsteinBagel-8B-GGUF/blob/main/TitleOS-EinsteinBagel-8B-Q8_0.gguf) | 8145.11 MB |
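
Once a file is downloaded, it can be loaded by any llama.cpp-compatible runtime. A minimal sketch with the `llama-cpp-python` bindings, assuming the Q4_K_M file from the table above is already on disk:

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window and adds memory
# use on top of the file size listed in the table above.
llm = Llama(
    model_path="TitleOS-EinsteinBagel-8B-Q4_K_M.gguf",
    n_ctx=4096,
)

# One-shot completion; this repo is tagged text-generation.
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```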
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF | featherless-ai-quants | 2024-11-10T19:50:09Z | 31 | 0 | null | [
"gguf",
"text-generation",
"base_model:macadeliccc/MBX-7B-v3-DPO",
"base_model:quantized:macadeliccc/MBX-7B-v3-DPO",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-07T17:30:15Z | ---
base_model: macadeliccc/MBX-7B-v3-DPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# macadeliccc/MBX-7B-v3-DPO GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [macadeliccc-MBX-7B-v3-DPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [macadeliccc-MBX-7B-v3-DPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [macadeliccc-MBX-7B-v3-DPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [macadeliccc-MBX-7B-v3-DPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [macadeliccc-MBX-7B-v3-DPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [macadeliccc-MBX-7B-v3-DPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [macadeliccc-MBX-7B-v3-DPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [macadeliccc-MBX-7B-v3-DPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [macadeliccc-MBX-7B-v3-DPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [macadeliccc-MBX-7B-v3-DPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [macadeliccc-MBX-7B-v3-DPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF/blob/main/macadeliccc-MBX-7B-v3-DPO-Q8_0.gguf) | 7339.34 MB |
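
Because every file follows the `<model-name>-<quant>.gguf` naming pattern, the available quantizations can also be enumerated at runtime rather than read off the table. A minimal sketch using `huggingface_hub.HfApi` with this card's repo id:

```python
from huggingface_hub import HfApi

# Enumerate the repo and keep only the GGUF quantization files.
api = HfApi()
repo_id = "featherless-ai-quants/macadeliccc-MBX-7B-v3-DPO-GGUF"
gguf_files = [f for f in api.list_repo_files(repo_id) if f.endswith(".gguf")]
for name in sorted(gguf_files):
    print(name)
```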
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF | featherless-ai-quants | 2024-11-10T19:50:00Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:vicgalle/Roleplay-Hermes-3-Llama-3.1-8B",
"base_model:quantized:vicgalle/Roleplay-Hermes-3-Llama-3.1-8B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T16:39:07Z | ---
base_model: vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# vicgalle/Roleplay-Hermes-3-Llama-3.1-8B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-GGUF/blob/main/vicgalle-Roleplay-Hermes-3-Llama-3.1-8B-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF | featherless-ai-quants | 2024-11-10T19:49:51Z | 13 | 0 | null | [
"gguf",
"text-generation",
"base_model:OpenRLHF/Llama-3-8b-sft-mixture",
"base_model:quantized:OpenRLHF/Llama-3-8b-sft-mixture",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T15:13:49Z | ---
base_model: OpenRLHF/Llama-3-8b-sft-mixture
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# OpenRLHF/Llama-3-8b-sft-mixture GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [OpenRLHF-Llama-3-8b-sft-mixture-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [OpenRLHF-Llama-3-8b-sft-mixture-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [OpenRLHF-Llama-3-8b-sft-mixture-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [OpenRLHF-Llama-3-8b-sft-mixture-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [OpenRLHF-Llama-3-8b-sft-mixture-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [OpenRLHF-Llama-3-8b-sft-mixture-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [OpenRLHF-Llama-3-8b-sft-mixture-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [OpenRLHF-Llama-3-8b-sft-mixture-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [OpenRLHF-Llama-3-8b-sft-mixture-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [OpenRLHF-Llama-3-8b-sft-mixture-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [OpenRLHF-Llama-3-8b-sft-mixture-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/OpenRLHF-Llama-3-8b-sft-mixture-GGUF/blob/main/OpenRLHF-Llama-3-8b-sft-mixture-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF | featherless-ai-quants | 2024-11-10T19:49:50Z | 15 | 0 | null | [
"gguf",
"text-generation",
"base_model:wkshin89/mistral-7b-instruct-ko-test-v0.2",
"base_model:quantized:wkshin89/mistral-7b-instruct-ko-test-v0.2",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-07T15:03:17Z | ---
base_model: wkshin89/mistral-7b-instruct-ko-test-v0.2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# wkshin89/mistral-7b-instruct-ko-test-v0.2 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [wkshin89-mistral-7b-instruct-ko-test-v0.2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [wkshin89-mistral-7b-instruct-ko-test-v0.2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/wkshin89-mistral-7b-instruct-ko-test-v0.2-GGUF/blob/main/wkshin89-mistral-7b-instruct-ko-test-v0.2-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF | featherless-ai-quants | 2024-11-10T19:49:42Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:imone/Llama-3-8B-fixed-special-embedding",
"base_model:quantized:imone/Llama-3-8B-fixed-special-embedding",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-07T14:33:23Z | ---
base_model: imone/Llama-3-8B-fixed-special-embedding
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# imone/Llama-3-8B-fixed-special-embedding GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [imone-Llama-3-8B-fixed-special-embedding-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [imone-Llama-3-8B-fixed-special-embedding-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [imone-Llama-3-8B-fixed-special-embedding-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [imone-Llama-3-8B-fixed-special-embedding-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [imone-Llama-3-8B-fixed-special-embedding-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [imone-Llama-3-8B-fixed-special-embedding-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [imone-Llama-3-8B-fixed-special-embedding-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [imone-Llama-3-8B-fixed-special-embedding-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [imone-Llama-3-8B-fixed-special-embedding-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [imone-Llama-3-8B-fixed-special-embedding-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [imone-Llama-3-8B-fixed-special-embedding-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/imone-Llama-3-8B-fixed-special-embedding-GGUF/blob/main/imone-Llama-3-8B-fixed-special-embedding-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF | featherless-ai-quants | 2024-11-10T19:49:27Z | 310 | 0 | null | [
"gguf",
"text-generation",
"base_model:BeaverAI/mistral-doryV2-12b",
"base_model:quantized:BeaverAI/mistral-doryV2-12b",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T13:13:59Z | ---
base_model: BeaverAI/mistral-doryV2-12b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# BeaverAI/mistral-doryV2-12b GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [BeaverAI-mistral-doryV2-12b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [BeaverAI-mistral-doryV2-12b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [BeaverAI-mistral-doryV2-12b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [BeaverAI-mistral-doryV2-12b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [BeaverAI-mistral-doryV2-12b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [BeaverAI-mistral-doryV2-12b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [BeaverAI-mistral-doryV2-12b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [BeaverAI-mistral-doryV2-12b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [BeaverAI-mistral-doryV2-12b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [BeaverAI-mistral-doryV2-12b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [BeaverAI-mistral-doryV2-12b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/BeaverAI-mistral-doryV2-12b-GGUF/blob/main/BeaverAI-mistral-doryV2-12b-Q8_0.gguf) | 12419.10 MB |
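
The size column gives a first-order way to pick a quantization for a given memory budget; note that actual usage is higher once the context/KV cache is allocated. A hedged sketch (the `best_fit` helper is hypothetical; sizes are copied from the table above):

```python
# File sizes in MB, copied from the quantization table above.
quants_mb = {
    "Q2_K": 4569.10, "Q3_K_S": 5277.85, "Q3_K_M": 5801.29, "Q3_K_L": 6257.54,
    "IQ4_XS": 6485.04, "Q4_K_S": 6790.35, "Q4_K_M": 7130.82,
    "Q5_K_S": 8124.10, "Q5_K_M": 8323.32, "Q6_K": 9590.35, "Q8_0": 12419.10,
}

def best_fit(budget_mb: float) -> str | None:
    """Largest quantization whose file fits the budget, or None if none fit."""
    fitting = {q: s for q, s in quants_mb.items() if s <= budget_mb}
    return max(fitting, key=fitting.get) if fitting else None

print(best_fit(8000.0))  # Q4_K_M for an ~8 GB budget
```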
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF | featherless-ai-quants | 2024-11-10T19:49:24Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:Locutusque/Apollo-2.0-Llama-3.1-8B",
"base_model:quantized:Locutusque/Apollo-2.0-Llama-3.1-8B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T13:09:09Z | ---
base_model: Locutusque/Apollo-2.0-Llama-3.1-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Locutusque/Apollo-2.0-Llama-3.1-8B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Locutusque-Apollo-2.0-Llama-3.1-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [Locutusque-Apollo-2.0-Llama-3.1-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-2.0-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-2.0-Llama-3.1-8B-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF | featherless-ai-quants | 2024-11-10T19:49:06Z | 31 | 0 | null | [
"gguf",
"text-generation",
"base_model:elinas/Llama-3-15B-Instruct-zeroed",
"base_model:quantized:elinas/Llama-3-15B-Instruct-zeroed",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T11:57:21Z | ---
base_model: elinas/Llama-3-15B-Instruct-zeroed
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# elinas/Llama-3-15B-Instruct-zeroed GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [elinas-Llama-3-15B-Instruct-zeroed-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-IQ4_XS.gguf) | 7868.64 MB |
| Q2_K | [elinas-Llama-3-15B-Instruct-zeroed-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q2_K.gguf) | 5480.87 MB |
| Q3_K_L | [elinas-Llama-3-15B-Instruct-zeroed-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q3_K_L.gguf) | 7609.76 MB |
| Q3_K_M | [elinas-Llama-3-15B-Instruct-zeroed-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q3_K_M.gguf) | 7030.76 MB |
| Q3_K_S | [elinas-Llama-3-15B-Instruct-zeroed-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q3_K_S.gguf) | 6355.76 MB |
| Q4_K_M | [elinas-Llama-3-15B-Instruct-zeroed-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q4_K_M.gguf) | 8685.29 MB |
| Q4_K_S | [elinas-Llama-3-15B-Instruct-zeroed-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q4_K_S.gguf) | 8248.29 MB |
| Q5_K_M | [elinas-Llama-3-15B-Instruct-zeroed-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q5_K_M.gguf) | 10171.92 MB |
| Q5_K_S | [elinas-Llama-3-15B-Instruct-zeroed-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q5_K_S.gguf) | 9916.92 MB |
| Q6_K | [elinas-Llama-3-15B-Instruct-zeroed-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q6_K.gguf) | 11751.46 MB |
| Q8_0 | [elinas-Llama-3-15B-Instruct-zeroed-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/elinas-Llama-3-15B-Instruct-zeroed-GGUF/blob/main/elinas-Llama-3-15B-Instruct-zeroed-Q8_0.gguf) | 15218.13 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF | featherless-ai-quants | 2024-11-10T19:49:01Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25",
"base_model:quantized:lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T11:42:52Z | ---
base_model: lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-top25-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF | featherless-ai-quants | 2024-11-10T19:48:50Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:Gille/StrangeMerges_51-7B-dare_ties",
"base_model:quantized:Gille/StrangeMerges_51-7B-dare_ties",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-07T10:49:35Z | ---
base_model: Gille/StrangeMerges_51-7B-dare_ties
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Gille/StrangeMerges_51-7B-dare_ties GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Gille-StrangeMerges_51-7B-dare_ties-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Gille-StrangeMerges_51-7B-dare_ties-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Gille-StrangeMerges_51-7B-dare_ties-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Gille-StrangeMerges_51-7B-dare_ties-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Gille-StrangeMerges_51-7B-dare_ties-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Gille-StrangeMerges_51-7B-dare_ties-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Gille-StrangeMerges_51-7B-dare_ties-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Gille-StrangeMerges_51-7B-dare_ties-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Gille-StrangeMerges_51-7B-dare_ties-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Gille-StrangeMerges_51-7B-dare_ties-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Gille-StrangeMerges_51-7B-dare_ties-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Gille-StrangeMerges_51-7B-dare_ties-GGUF/blob/main/Gille-StrangeMerges_51-7B-dare_ties-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF | featherless-ai-quants | 2024-11-10T19:48:48Z | 274 | 1 | null | [
"gguf",
"text-generation",
"base_model:aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K",
"base_model:quantized:aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-07T10:42:41Z | ---
base_model: aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF/blob/main/aifeifei798-llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF | featherless-ai-quants | 2024-11-10T19:48:46Z | 29 | 0 | null | [
"gguf",
"text-generation",
"base_model:chlee10/T3Q-Mistral-Orca-Math-dpo-v2.0",
"base_model:quantized:chlee10/T3Q-Mistral-Orca-Math-dpo-v2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-07T10:30:56Z | ---
base_model: chlee10/T3Q-Mistral-Orca-Math-dpo-v2.0
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# chlee10/T3Q-Mistral-Orca-Math-dpo-v2.0 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-GGUF/blob/main/chlee10-T3Q-Mistral-Orca-Math-dpo-v2.0-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF | featherless-ai-quants | 2024-11-10T19:48:41Z | 23 | 0 | null | [
"gguf",
"text-generation",
"base_model:flammenai/Mahou-1.1-mistral-7B",
"base_model:quantized:flammenai/Mahou-1.1-mistral-7B",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-07T10:15:08Z | ---
base_model: flammenai/Mahou-1.1-mistral-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# flammenai/Mahou-1.1-mistral-7B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [flammenai-Mahou-1.1-mistral-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [flammenai-Mahou-1.1-mistral-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [flammenai-Mahou-1.1-mistral-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [flammenai-Mahou-1.1-mistral-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [flammenai-Mahou-1.1-mistral-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [flammenai-Mahou-1.1-mistral-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [flammenai-Mahou-1.1-mistral-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [flammenai-Mahou-1.1-mistral-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [flammenai-Mahou-1.1-mistral-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [flammenai-Mahou-1.1-mistral-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [flammenai-Mahou-1.1-mistral-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.1-mistral-7B-GGUF/blob/main/flammenai-Mahou-1.1-mistral-7B-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF | featherless-ai-quants | 2024-11-10T19:48:31Z | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:h2m/mhm-7b-v1.3-DPO-1",
"base_model:quantized:h2m/mhm-7b-v1.3-DPO-1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T09:34:14Z | ---
base_model: h2m/mhm-7b-v1.3-DPO-1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# h2m/mhm-7b-v1.3-DPO-1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [h2m-mhm-7b-v1.3-DPO-1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [h2m-mhm-7b-v1.3-DPO-1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [h2m-mhm-7b-v1.3-DPO-1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [h2m-mhm-7b-v1.3-DPO-1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [h2m-mhm-7b-v1.3-DPO-1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [h2m-mhm-7b-v1.3-DPO-1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [h2m-mhm-7b-v1.3-DPO-1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [h2m-mhm-7b-v1.3-DPO-1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [h2m-mhm-7b-v1.3-DPO-1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [h2m-mhm-7b-v1.3-DPO-1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [h2m-mhm-7b-v1.3-DPO-1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/h2m-mhm-7b-v1.3-DPO-1-GGUF/blob/main/h2m-mhm-7b-v1.3-DPO-1-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF | featherless-ai-quants | 2024-11-10T19:48:29Z | 11 | 0 | null | [
"gguf",
"text-generation",
"base_model:OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.5",
"base_model:quantized:OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.5",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T09:31:17Z | ---
base_model: OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.5
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.5 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.5-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF | featherless-ai-quants | 2024-11-10T19:48:22Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:GalrionSoftworks/Pleiades-12B-v1",
"base_model:quantized:GalrionSoftworks/Pleiades-12B-v1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T09:17:15Z | ---
base_model: GalrionSoftworks/Pleiades-12B-v1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# GalrionSoftworks/Pleiades-12B-v1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [GalrionSoftworks-Pleiades-12B-v1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [GalrionSoftworks-Pleiades-12B-v1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [GalrionSoftworks-Pleiades-12B-v1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [GalrionSoftworks-Pleiades-12B-v1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [GalrionSoftworks-Pleiades-12B-v1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [GalrionSoftworks-Pleiades-12B-v1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [GalrionSoftworks-Pleiades-12B-v1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [GalrionSoftworks-Pleiades-12B-v1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [GalrionSoftworks-Pleiades-12B-v1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [GalrionSoftworks-Pleiades-12B-v1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [GalrionSoftworks-Pleiades-12B-v1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Pleiades-12B-v1-GGUF/blob/main/GalrionSoftworks-Pleiades-12B-v1-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF | featherless-ai-quants | 2024-11-10T19:48:21Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:sfairXC/FsfairX-Zephyr-Chat-v0.1",
"base_model:quantized:sfairXC/FsfairX-Zephyr-Chat-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T09:07:19Z | ---
base_model: sfairXC/FsfairX-Zephyr-Chat-v0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# sfairXC/FsfairX-Zephyr-Chat-v0.1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [sfairXC-FsfairX-Zephyr-Chat-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [sfairXC-FsfairX-Zephyr-Chat-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/sfairXC-FsfairX-Zephyr-Chat-v0.1-GGUF/blob/main/sfairXC-FsfairX-Zephyr-Chat-v0.1-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF | featherless-ai-quants | 2024-11-10T19:47:57Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:mlabonne/TwinLlama-3.1-8B-DPO",
"base_model:quantized:mlabonne/TwinLlama-3.1-8B-DPO",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-07T07:02:31Z | ---
base_model: mlabonne/TwinLlama-3.1-8B-DPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# mlabonne/TwinLlama-3.1-8B-DPO GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [mlabonne-TwinLlama-3.1-8B-DPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [mlabonne-TwinLlama-3.1-8B-DPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [mlabonne-TwinLlama-3.1-8B-DPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [mlabonne-TwinLlama-3.1-8B-DPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [mlabonne-TwinLlama-3.1-8B-DPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [mlabonne-TwinLlama-3.1-8B-DPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [mlabonne-TwinLlama-3.1-8B-DPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [mlabonne-TwinLlama-3.1-8B-DPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [mlabonne-TwinLlama-3.1-8B-DPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [mlabonne-TwinLlama-3.1-8B-DPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [mlabonne-TwinLlama-3.1-8B-DPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-TwinLlama-3.1-8B-DPO-GGUF/blob/main/mlabonne-TwinLlama-3.1-8B-DPO-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF | featherless-ai-quants | 2024-11-10T19:47:53Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:netcat420/MFANN-llama3.1-abliterated-SLERP-v3.1",
"base_model:quantized:netcat420/MFANN-llama3.1-abliterated-SLERP-v3.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T06:35:53Z | ---
base_model: netcat420/MFANN-llama3.1-abliterated-SLERP-v3.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# netcat420/MFANN-llama3.1-abliterated-SLERP-v3.1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-GGUF/blob/main/netcat420-MFANN-llama3.1-abliterated-SLERP-v3.1-Q8_0.gguf) | 8145.12 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF | featherless-ai-quants | 2024-11-10T19:47:43Z | 9 | 0 | null | [
"gguf",
"text-generation",
"base_model:leekh7624/Llama-3-Open-Ko-8B-Instruct-sample",
"base_model:quantized:leekh7624/Llama-3-Open-Ko-8B-Instruct-sample",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T05:52:15Z | ---
base_model: leekh7624/Llama-3-Open-Ko-8B-Instruct-sample
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# leekh7624/Llama-3-Open-Ko-8B-Instruct-sample GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-GGUF/blob/main/leekh7624-Llama-3-Open-Ko-8B-Instruct-sample-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF | featherless-ai-quants | 2024-11-10T19:47:40Z | 81 | 0 | null | [
"gguf",
"text-generation",
"base_model:chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO",
"base_model:quantized:chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T05:35:19Z | ---
base_model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/chujiezheng-Llama-3-Instruct-8B-SimPO-ExPO-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF | featherless-ai-quants | 2024-11-10T19:47:32Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:braindao/iq-code-evmind-v2-llama3-code-8b-instruct",
"base_model:quantized:braindao/iq-code-evmind-v2-llama3-code-8b-instruct",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T04:52:50Z | ---
base_model: braindao/iq-code-evmind-v2-llama3-code-8b-instruct
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# braindao/iq-code-evmind-v2-llama3-code-8b-instruct GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-GGUF/blob/main/braindao-iq-code-evmind-v2-llama3-code-8b-instruct-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF | featherless-ai-quants | 2024-11-10T19:47:10Z | 10 | 0 | null | [
"gguf",
"text-generation",
"base_model:ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta",
"base_model:quantized:ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T03:10:09Z | ---
base_model: ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF | featherless-ai-quants | 2024-11-10T19:46:57Z | 9 | 0 | null | [
"gguf",
"text-generation",
"base_model:openchat/openchat-3.6-8b-20240522",
"base_model:quantized:openchat/openchat-3.6-8b-20240522",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T01:49:28Z | ---
base_model: openchat/openchat-3.6-8b-20240522
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# openchat/openchat-3.6-8b-20240522 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [openchat-openchat-3.6-8b-20240522-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [openchat-openchat-3.6-8b-20240522-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [openchat-openchat-3.6-8b-20240522-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [openchat-openchat-3.6-8b-20240522-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [openchat-openchat-3.6-8b-20240522-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [openchat-openchat-3.6-8b-20240522-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [openchat-openchat-3.6-8b-20240522-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [openchat-openchat-3.6-8b-20240522-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [openchat-openchat-3.6-8b-20240522-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [openchat-openchat-3.6-8b-20240522-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [openchat-openchat-3.6-8b-20240522-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF/blob/main/openchat-openchat-3.6-8b-20240522-Q8_0.gguf) | 8145.11 MB |
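Because this is a chat-tuned model, a chat-style call is the more natural interface. The sketch below (an addition, not part of the original card) reuses the Q4_K_M file from the table and assumes the GGUF metadata carries a usable chat template; if it does not, pass an explicit `chat_format` when constructing `Llama`.

```python
# Sketch: chat-style inference against the Q4_K_M quant listed above.
# Requires: pip install huggingface-hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="featherless-ai-quants/openchat-openchat-3.6-8b-20240522-GGUF",
    filename="openchat-openchat-3.6-8b-20240522-Q4_K_M.gguf",
)

# Relies on the chat template embedded in the GGUF (an assumption).
llm = Llama(model_path=path, n_ctx=4096)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}],
    max_tokens=96,
)
print(reply["choices"][0]["message"]["content"])
```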
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF | featherless-ai-quants | 2024-11-10T19:46:53Z | 21 | 0 | null | [
"gguf",
"text-generation",
"base_model:OpenBuddy/openbuddy-llama3.1-8b-v22.3-131k",
"base_model:quantized:OpenBuddy/openbuddy-llama3.1-8b-v22.3-131k",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T01:11:51Z | ---
base_model: OpenBuddy/openbuddy-llama3.1-8b-v22.3-131k
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# OpenBuddy/openbuddy-llama3.1-8b-v22.3-131k GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-GGUF/blob/main/OpenBuddy-openbuddy-llama3.1-8b-v22.3-131k-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF | featherless-ai-quants | 2024-11-10T19:46:33Z | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:allknowingroger/NexusMistral2-7B-slerp",
"base_model:quantized:allknowingroger/NexusMistral2-7B-slerp",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-07T00:18:41Z | ---
base_model: allknowingroger/NexusMistral2-7B-slerp
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# allknowingroger/NexusMistral2-7B-slerp GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [allknowingroger-NexusMistral2-7B-slerp-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [allknowingroger-NexusMistral2-7B-slerp-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [allknowingroger-NexusMistral2-7B-slerp-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [allknowingroger-NexusMistral2-7B-slerp-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [allknowingroger-NexusMistral2-7B-slerp-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [allknowingroger-NexusMistral2-7B-slerp-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [allknowingroger-NexusMistral2-7B-slerp-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [allknowingroger-NexusMistral2-7B-slerp-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [allknowingroger-NexusMistral2-7B-slerp-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [allknowingroger-NexusMistral2-7B-slerp-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [allknowingroger-NexusMistral2-7B-slerp-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-NexusMistral2-7B-slerp-GGUF/blob/main/allknowingroger-NexusMistral2-7B-slerp-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF | featherless-ai-quants | 2024-11-10T19:46:30Z | 29 | 0 | null | [
"gguf",
"text-generation",
"base_model:maldv/llama-3-fantasy-writer-8b",
"base_model:quantized:maldv/llama-3-fantasy-writer-8b",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T23:57:35Z | ---
base_model: maldv/llama-3-fantasy-writer-8b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# maldv/llama-3-fantasy-writer-8b GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [maldv-llama-3-fantasy-writer-8b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [maldv-llama-3-fantasy-writer-8b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [maldv-llama-3-fantasy-writer-8b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [maldv-llama-3-fantasy-writer-8b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [maldv-llama-3-fantasy-writer-8b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [maldv-llama-3-fantasy-writer-8b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [maldv-llama-3-fantasy-writer-8b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [maldv-llama-3-fantasy-writer-8b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [maldv-llama-3-fantasy-writer-8b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [maldv-llama-3-fantasy-writer-8b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [maldv-llama-3-fantasy-writer-8b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/maldv-llama-3-fantasy-writer-8b-GGUF/blob/main/maldv-llama-3-fantasy-writer-8b-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF | featherless-ai-quants | 2024-11-10T19:46:27Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:nothingiisreal/MN-12B-Celeste-V1.9",
"base_model:quantized:nothingiisreal/MN-12B-Celeste-V1.9",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T23:50:43Z | ---
base_model: nothingiisreal/MN-12B-Celeste-V1.9
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nothingiisreal/MN-12B-Celeste-V1.9 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nothingiisreal-MN-12B-Celeste-V1.9-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [nothingiisreal-MN-12B-Celeste-V1.9-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [nothingiisreal-MN-12B-Celeste-V1.9-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [nothingiisreal-MN-12B-Celeste-V1.9-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nothingiisreal-MN-12B-Celeste-V1.9-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [nothingiisreal-MN-12B-Celeste-V1.9-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [nothingiisreal-MN-12B-Celeste-V1.9-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [nothingiisreal-MN-12B-Celeste-V1.9-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [nothingiisreal-MN-12B-Celeste-V1.9-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [nothingiisreal-MN-12B-Celeste-V1.9-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [nothingiisreal-MN-12B-Celeste-V1.9-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nothingiisreal-MN-12B-Celeste-V1.9-GGUF/blob/main/nothingiisreal-MN-12B-Celeste-V1.9-Q8_0.gguf) | 12419.10 MB |
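Once a file is downloaded, one common way to run it locally is `llama-cpp-python`. A sketch under those assumptions (package installed, and the Q4_K_M file from the table above already downloaded to the working directory):

```python
from llama_cpp import Llama

# Load a downloaded GGUF file with llama-cpp-python (one common GGUF runtime).
llm = Llama(
    model_path="nothingiisreal-MN-12B-Celeste-V1.9-Q4_K_M.gguf",
    n_ctx=4096,  # context window; raise it if you have the memory for it
)
out = llm("Write one sentence about the sea.", max_tokens=64)
print(out["choices"][0]["text"])
```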
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF | featherless-ai-quants | 2024-11-10T19:46:24Z | 10 | 0 | null | [
"gguf",
"text-generation",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test",
"base_model:quantized:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T23:25:03Z | ---
base_model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-GGUF/blob/main/eren23-ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF | featherless-ai-quants | 2024-11-10T19:46:21Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:daneggertmoeller/CircularConstructionGPT-1",
"base_model:quantized:daneggertmoeller/CircularConstructionGPT-1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T23:18:29Z | ---
base_model: daneggertmoeller/CircularConstructionGPT-1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# daneggertmoeller/CircularConstructionGPT-1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [daneggertmoeller-CircularConstructionGPT-1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [daneggertmoeller-CircularConstructionGPT-1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [daneggertmoeller-CircularConstructionGPT-1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [daneggertmoeller-CircularConstructionGPT-1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [daneggertmoeller-CircularConstructionGPT-1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [daneggertmoeller-CircularConstructionGPT-1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [daneggertmoeller-CircularConstructionGPT-1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [daneggertmoeller-CircularConstructionGPT-1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [daneggertmoeller-CircularConstructionGPT-1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [daneggertmoeller-CircularConstructionGPT-1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [daneggertmoeller-CircularConstructionGPT-1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/daneggertmoeller-CircularConstructionGPT-1-GGUF/blob/main/daneggertmoeller-CircularConstructionGPT-1-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF | featherless-ai-quants | 2024-11-10T19:46:06Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:lllyasviel/omost-llama-3-8b",
"base_model:quantized:lllyasviel/omost-llama-3-8b",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T22:26:30Z | ---
base_model: lllyasviel/omost-llama-3-8b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# lllyasviel/omost-llama-3-8b GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [lllyasviel-omost-llama-3-8b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [lllyasviel-omost-llama-3-8b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [lllyasviel-omost-llama-3-8b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [lllyasviel-omost-llama-3-8b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [lllyasviel-omost-llama-3-8b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [lllyasviel-omost-llama-3-8b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [lllyasviel-omost-llama-3-8b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [lllyasviel-omost-llama-3-8b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [lllyasviel-omost-llama-3-8b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [lllyasviel-omost-llama-3-8b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [lllyasviel-omost-llama-3-8b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/lllyasviel-omost-llama-3-8b-GGUF/blob/main/lllyasviel-omost-llama-3-8b-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF | featherless-ai-quants | 2024-11-10T19:46:03Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:tuneai/Meta-Llama-3-8B",
"base_model:quantized:tuneai/Meta-Llama-3-8B",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T22:24:55Z | ---
base_model: tuneai/Meta-Llama-3-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# tuneai/Meta-Llama-3-8B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [tuneai-Meta-Llama-3-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [tuneai-Meta-Llama-3-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [tuneai-Meta-Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [tuneai-Meta-Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [tuneai-Meta-Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [tuneai-Meta-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [tuneai-Meta-Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [tuneai-Meta-Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [tuneai-Meta-Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [tuneai-Meta-Llama-3-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [tuneai-Meta-Llama-3-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/tuneai-Meta-Llama-3-8B-GGUF/blob/main/tuneai-Meta-Llama-3-8B-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF | featherless-ai-quants | 2024-11-10T19:45:58Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:AXCXEPT/Llama-3-EZO-8b-Common-it",
"base_model:quantized:AXCXEPT/Llama-3-EZO-8b-Common-it",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T22:19:07Z | ---
base_model: HODACHI/Llama-3-EZO-8b-Common-it
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# HODACHI/Llama-3-EZO-8b-Common-it GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [HODACHI-Llama-3-EZO-8b-Common-it-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [HODACHI-Llama-3-EZO-8b-Common-it-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [HODACHI-Llama-3-EZO-8b-Common-it-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [HODACHI-Llama-3-EZO-8b-Common-it-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [HODACHI-Llama-3-EZO-8b-Common-it-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [HODACHI-Llama-3-EZO-8b-Common-it-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [HODACHI-Llama-3-EZO-8b-Common-it-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [HODACHI-Llama-3-EZO-8b-Common-it-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [HODACHI-Llama-3-EZO-8b-Common-it-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [HODACHI-Llama-3-EZO-8b-Common-it-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [HODACHI-Llama-3-EZO-8b-Common-it-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF/blob/main/HODACHI-Llama-3-EZO-8b-Common-it-Q8_0.gguf) | 8145.11 MB |
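The same file list can also be read programmatically, which is handy when scripting against many of these quant repos. A small sketch using `huggingface_hub` (assumed installed):

```python
from huggingface_hub import HfApi

# Enumerate the GGUF files in this repo instead of hard-coding filenames.
api = HfApi()
for f in api.list_repo_files("featherless-ai-quants/HODACHI-Llama-3-EZO-8b-Common-it-GGUF"):
    if f.endswith(".gguf"):
        print(f)
```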
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF | featherless-ai-quants | 2024-11-10T19:45:45Z | 15 | 0 | null | [
"gguf",
"text-generation",
"base_model:nasiruddin15/Neural-grok-dolphin-Mistral-7B",
"base_model:quantized:nasiruddin15/Neural-grok-dolphin-Mistral-7B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T20:47:43Z | ---
base_model: nasiruddin15/Neural-grok-dolphin-Mistral-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nasiruddin15/Neural-grok-dolphin-Mistral-7B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nasiruddin15-Neural-grok-dolphin-Mistral-7B-GGUF/blob/main/nasiruddin15-Neural-grok-dolphin-Mistral-7B-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF | featherless-ai-quants | 2024-11-10T19:45:33Z | 10 | 0 | null | [
"gguf",
"text-generation",
"base_model:nbeerbower/mistral-nemo-gutenberg-12B-v4",
"base_model:quantized:nbeerbower/mistral-nemo-gutenberg-12B-v4",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T20:30:05Z | ---
base_model: nbeerbower/mistral-nemo-gutenberg-12B-v4
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/mistral-nemo-gutenberg-12B-v4 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nbeerbower-mistral-nemo-gutenberg-12B-v4-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [nbeerbower-mistral-nemo-gutenberg-12B-v4-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutenberg-12B-v4-GGUF/blob/main/nbeerbower-mistral-nemo-gutenberg-12B-v4-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF | featherless-ai-quants | 2024-11-10T19:45:18Z | 28 | 0 | null | [
"gguf",
"text-generation",
"base_model:VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct",
"base_model:quantized:VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T19:35:49Z | ---
base_model: VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-GGUF/blob/main/VAGOsolutions-SauerkrautLM-Nemo-12b-Instruct-Q8_0.gguf) | 12419.10 MB |
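The size column also makes it easy to match a file to your hardware. A rough sketch of that selection logic follows; note the 1.2x headroom factor is an assumption rather than a measured figure, since real memory use depends on context length and backend:

```python
# Hypothetical helper: pick the largest quantization that fits a memory budget.
# Sizes are the MB values from the table above; the headroom factor is an
# assumption covering KV cache and runtime overhead, not a guarantee.
SIZES_MB = {
    "Q2_K": 4569.10, "Q3_K_S": 5277.85, "Q3_K_M": 5801.29, "Q3_K_L": 6257.54,
    "IQ4_XS": 6485.04, "Q4_K_S": 6790.35, "Q4_K_M": 7130.82,
    "Q5_K_S": 8124.10, "Q5_K_M": 8323.32, "Q6_K": 9590.35, "Q8_0": 12419.10,
}

def largest_fitting_quant(budget_mb: float, headroom: float = 1.2) -> str | None:
    fitting = {q: s for q, s in SIZES_MB.items() if s * headroom <= budget_mb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fitting_quant(16 * 1024))  # a 16 GB budget selects "Q8_0"
```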
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF | featherless-ai-quants | 2024-11-10T19:45:07Z | 10 | 0 | null | [
"gguf",
"text-generation",
"base_model:ProdeusUnity/Stellar-Odyssey-12b-v0.0",
"base_model:quantized:ProdeusUnity/Stellar-Odyssey-12b-v0.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T18:33:46Z | ---
base_model: ProdeusUnity/Stellar-Odyssey-12b-v0.0
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ProdeusUnity/Stellar-Odyssey-12b-v0.0 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ProdeusUnity-Stellar-Odyssey-12b-v0.0-GGUF/blob/main/ProdeusUnity-Stellar-Odyssey-12b-v0.0-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF | featherless-ai-quants | 2024-11-10T19:45:03Z | 15 | 0 | null | [
"gguf",
"text-generation",
"base_model:Yuma42/KangalKhan-Alpha-Sapphiroid-7B-Fixed",
"base_model:quantized:Yuma42/KangalKhan-Alpha-Sapphiroid-7B-Fixed",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T18:22:49Z | ---
base_model: Yuma42/KangalKhan-Alpha-Sapphiroid-7B-Fixed
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Yuma42/KangalKhan-Alpha-Sapphiroid-7B-Fixed GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/blob/main/Yuma42-KangalKhan-Alpha-Sapphiroid-7B-Fixed-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF | featherless-ai-quants | 2024-11-10T19:44:58Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:nayohan/llama3-instrucTrans-enko-8b",
"base_model:quantized:nayohan/llama3-instrucTrans-enko-8b",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T18:07:03Z | ---
base_model: nayohan/llama3-instrucTrans-enko-8b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nayohan/llama3-instrucTrans-enko-8b GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nayohan-llama3-instrucTrans-enko-8b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [nayohan-llama3-instrucTrans-enko-8b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [nayohan-llama3-instrucTrans-enko-8b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [nayohan-llama3-instrucTrans-enko-8b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [nayohan-llama3-instrucTrans-enko-8b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [nayohan-llama3-instrucTrans-enko-8b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [nayohan-llama3-instrucTrans-enko-8b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [nayohan-llama3-instrucTrans-enko-8b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [nayohan-llama3-instrucTrans-enko-8b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [nayohan-llama3-instrucTrans-enko-8b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [nayohan-llama3-instrucTrans-enko-8b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nayohan-llama3-instrucTrans-enko-8b-GGUF/blob/main/nayohan-llama3-instrucTrans-enko-8b-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF | featherless-ai-quants | 2024-11-10T19:44:44Z | 15 | 0 | null | [
"gguf",
"text-generation",
"base_model:OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1",
"base_model:quantized:OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T17:00:17Z | ---
base_model: OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-DPO-v0.1-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF | featherless-ai-quants | 2024-11-10T19:44:43Z | 20 | 0 | null | [
"gguf",
"text-generation",
"base_model:Vivacem/Mistral-7B-MMIQC",
"base_model:quantized:Vivacem/Mistral-7B-MMIQC",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T16:54:19Z | ---
base_model: Vivacem/Mistral-7B-MMIQC
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Vivacem/Mistral-7B-MMIQC GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Vivacem-Mistral-7B-MMIQC-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Vivacem-Mistral-7B-MMIQC-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Vivacem-Mistral-7B-MMIQC-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Vivacem-Mistral-7B-MMIQC-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Vivacem-Mistral-7B-MMIQC-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Vivacem-Mistral-7B-MMIQC-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Vivacem-Mistral-7B-MMIQC-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Vivacem-Mistral-7B-MMIQC-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Vivacem-Mistral-7B-MMIQC-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Vivacem-Mistral-7B-MMIQC-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q6_K.gguf) | 5666.79 MB |
| Q8_0 | [Vivacem-Mistral-7B-MMIQC-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Vivacem-Mistral-7B-MMIQC-GGUF/blob/main/Vivacem-Mistral-7B-MMIQC-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF | featherless-ai-quants | 2024-11-10T19:44:31Z | 26 | 1 | null | [
"gguf",
"text-generation",
"base_model:GalrionSoftworks/Canidori-12B-v1",
"base_model:quantized:GalrionSoftworks/Canidori-12B-v1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T16:26:29Z | ---
base_model: GalrionSoftworks/Canidori-12B-v1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# GalrionSoftworks/Canidori-12B-v1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [GalrionSoftworks-Canidori-12B-v1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [GalrionSoftworks-Canidori-12B-v1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [GalrionSoftworks-Canidori-12B-v1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [GalrionSoftworks-Canidori-12B-v1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [GalrionSoftworks-Canidori-12B-v1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [GalrionSoftworks-Canidori-12B-v1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [GalrionSoftworks-Canidori-12B-v1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [GalrionSoftworks-Canidori-12B-v1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [GalrionSoftworks-Canidori-12B-v1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [GalrionSoftworks-Canidori-12B-v1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [GalrionSoftworks-Canidori-12B-v1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-Canidori-12B-v1-GGUF/blob/main/GalrionSoftworks-Canidori-12B-v1-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any supported model from Hugging Face instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF | featherless-ai-quants | 2024-11-10T19:44:28Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:flammenai/Mahou-1.2a-llama3-8B",
"base_model:quantized:flammenai/Mahou-1.2a-llama3-8B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T16:02:37Z | ---
base_model: flammenai/Mahou-1.2a-llama3-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# flammenai/Mahou-1.2a-llama3-8B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [flammenai-Mahou-1.2a-llama3-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [flammenai-Mahou-1.2a-llama3-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [flammenai-Mahou-1.2a-llama3-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [flammenai-Mahou-1.2a-llama3-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [flammenai-Mahou-1.2a-llama3-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [flammenai-Mahou-1.2a-llama3-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [flammenai-Mahou-1.2a-llama3-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [flammenai-Mahou-1.2a-llama3-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [flammenai-Mahou-1.2a-llama3-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [flammenai-Mahou-1.2a-llama3-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [flammenai-Mahou-1.2a-llama3-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2a-llama3-8B-Q8_0.gguf) | 8145.11 MB |
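The table above only lists the artifacts; as a minimal usage sketch (assuming `huggingface_hub` and `llama-cpp-python` are installed, and picking Q4_K_M purely as a common size/quality middle ground), one of these files can be fetched and run like this:
```python
# Minimal sketch: download one quant listed above and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; the prompt and
# n_ctx value are illustrative, not part of this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="featherless-ai-quants/flammenai-Mahou-1.2a-llama3-8B-GGUF",
    filename="flammenai-Mahou-1.2a-llama3-8B-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # 4096-token context window
out = llm("Q: What does GGUF stand for? A:", max_tokens=48)
print(out["choices"][0]["text"])
```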
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF | featherless-ai-quants | 2024-11-10T19:44:06Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:FelixChao/Sectumsempra-7B-DPO",
"base_model:quantized:FelixChao/Sectumsempra-7B-DPO",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T14:11:04Z | ---
base_model: FelixChao/Sectumsempra-7B-DPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# FelixChao/Sectumsempra-7B-DPO GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [FelixChao-Sectumsempra-7B-DPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [FelixChao-Sectumsempra-7B-DPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [FelixChao-Sectumsempra-7B-DPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [FelixChao-Sectumsempra-7B-DPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [FelixChao-Sectumsempra-7B-DPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [FelixChao-Sectumsempra-7B-DPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [FelixChao-Sectumsempra-7B-DPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [FelixChao-Sectumsempra-7B-DPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [FelixChao-Sectumsempra-7B-DPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [FelixChao-Sectumsempra-7B-DPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q6_K.gguf) | 5666.79 MB |
| Q8_0 | [FelixChao-Sectumsempra-7B-DPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/FelixChao-Sectumsempra-7B-DPO-GGUF/blob/main/FelixChao-Sectumsempra-7B-DPO-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF | featherless-ai-quants | 2024-11-10T19:44:01Z | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:elinas/Chronos-Gold-12B-1.0",
"base_model:quantized:elinas/Chronos-Gold-12B-1.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T13:56:09Z | ---
base_model: elinas/Chronos-Gold-12B-1.0
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# elinas/Chronos-Gold-12B-1.0 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [elinas-Chronos-Gold-12B-1.0-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [elinas-Chronos-Gold-12B-1.0-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [elinas-Chronos-Gold-12B-1.0-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [elinas-Chronos-Gold-12B-1.0-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [elinas-Chronos-Gold-12B-1.0-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [elinas-Chronos-Gold-12B-1.0-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [elinas-Chronos-Gold-12B-1.0-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [elinas-Chronos-Gold-12B-1.0-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [elinas-Chronos-Gold-12B-1.0-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [elinas-Chronos-Gold-12B-1.0-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [elinas-Chronos-Gold-12B-1.0-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/elinas-Chronos-Gold-12B-1.0-GGUF/blob/main/elinas-Chronos-Gold-12B-1.0-Q8_0.gguf) | 12419.10 MB |
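A rough way to read the size column: dividing file size by parameter count gives an approximate bits-per-weight figure for each quant. A back-of-envelope sketch (assuming the listed sizes are MiB and a nominal 12.2B parameters for this model; GGUF metadata overhead is ignored):
```python
# Back-of-envelope bits-per-weight from the size column above.
# PARAMS is an assumed nominal parameter count, not repo metadata.
PARAMS = 12.2e9

sizes_mib = {"Q2_K": 4569.10, "Q4_K_M": 7130.82, "Q6_K": 9590.35, "Q8_0": 12419.10}

for quant, mib in sizes_mib.items():
    bits = mib * 1024**2 * 8  # file size converted to bits
    print(f"{quant}: ~{bits / PARAMS:.2f} bits/weight")

# Q8_0 lands near 8.5 rather than 8.0 because each quantization block
# also stores a scale factor alongside the 8-bit weights.
```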
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF | featherless-ai-quants | 2024-11-10T19:43:59Z | 68 | 0 | null | [
"gguf",
"text-generation",
"base_model:bunnycore/Chimera-Apex-7B",
"base_model:quantized:bunnycore/Chimera-Apex-7B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T13:54:41Z | ---
base_model: bunnycore/Chimera-Apex-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# bunnycore/Chimera-Apex-7B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [bunnycore-Chimera-Apex-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [bunnycore-Chimera-Apex-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [bunnycore-Chimera-Apex-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [bunnycore-Chimera-Apex-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [bunnycore-Chimera-Apex-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [bunnycore-Chimera-Apex-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [bunnycore-Chimera-Apex-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [bunnycore-Chimera-Apex-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [bunnycore-Chimera-Apex-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [bunnycore-Chimera-Apex-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [bunnycore-Chimera-Apex-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Chimera-Apex-7B-GGUF/blob/main/bunnycore-Chimera-Apex-7B-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF | featherless-ai-quants | 2024-11-10T19:43:54Z | 22 | 0 | null | [
"gguf",
"text-generation",
"base_model:chargoddard/prometheus-2-llama-3-8b",
"base_model:quantized:chargoddard/prometheus-2-llama-3-8b",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T13:33:02Z | ---
base_model: chargoddard/prometheus-2-llama-3-8b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# chargoddard/prometheus-2-llama-3-8b GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [chargoddard-prometheus-2-llama-3-8b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [chargoddard-prometheus-2-llama-3-8b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [chargoddard-prometheus-2-llama-3-8b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [chargoddard-prometheus-2-llama-3-8b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [chargoddard-prometheus-2-llama-3-8b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [chargoddard-prometheus-2-llama-3-8b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [chargoddard-prometheus-2-llama-3-8b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [chargoddard-prometheus-2-llama-3-8b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [chargoddard-prometheus-2-llama-3-8b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [chargoddard-prometheus-2-llama-3-8b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [chargoddard-prometheus-2-llama-3-8b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/chargoddard-prometheus-2-llama-3-8b-GGUF/blob/main/chargoddard-prometheus-2-llama-3-8b-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF | featherless-ai-quants | 2024-11-10T19:43:46Z | 19 | 0 | null | [
"gguf",
"text-generation",
"base_model:VongolaChouko/Starcannon-Unleashed-12B-v1.0",
"base_model:quantized:VongolaChouko/Starcannon-Unleashed-12B-v1.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T12:46:25Z | ---
base_model: VongolaChouko/Starcannon-Unleashed-12B-v1.0
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# VongolaChouko/Starcannon-Unleashed-12B-v1.0 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/VongolaChouko-Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/VongolaChouko-Starcannon-Unleashed-12B-v1.0-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF | featherless-ai-quants | 2024-11-10T19:43:38Z | 14 | 0 | null | [
"gguf",
"text-generation",
"base_model:nk2t/Llama-3-8B-Instruct-japanese-nk2t-v0.2",
"base_model:quantized:nk2t/Llama-3-8B-Instruct-japanese-nk2t-v0.2",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T12:17:24Z | ---
base_model: nk2t/Llama-3-8B-Instruct-japanese-nk2t-v0.2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nk2t/Llama-3-8B-Instruct-japanese-nk2t-v0.2 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-GGUF/blob/main/nk2t-Llama-3-8B-Instruct-japanese-nk2t-v0.2-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF | featherless-ai-quants | 2024-11-10T19:43:36Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:PetroGPT/WestSeverus-7B-DPO-v2",
"base_model:quantized:PetroGPT/WestSeverus-7B-DPO-v2",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T12:13:58Z | ---
base_model: PetroGPT/WestSeverus-7B-DPO-v2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# PetroGPT/WestSeverus-7B-DPO-v2 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [PetroGPT-WestSeverus-7B-DPO-v2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [PetroGPT-WestSeverus-7B-DPO-v2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [PetroGPT-WestSeverus-7B-DPO-v2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [PetroGPT-WestSeverus-7B-DPO-v2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [PetroGPT-WestSeverus-7B-DPO-v2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [PetroGPT-WestSeverus-7B-DPO-v2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [PetroGPT-WestSeverus-7B-DPO-v2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [PetroGPT-WestSeverus-7B-DPO-v2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [PetroGPT-WestSeverus-7B-DPO-v2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [PetroGPT-WestSeverus-7B-DPO-v2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [PetroGPT-WestSeverus-7B-DPO-v2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/PetroGPT-WestSeverus-7B-DPO-v2-GGUF/blob/main/PetroGPT-WestSeverus-7B-DPO-v2-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF | featherless-ai-quants | 2024-11-10T19:43:33Z | 30 | 0 | null | [
"gguf",
"text-generation",
"base_model:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored",
"base_model:quantized:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T12:04:29Z | ---
base_model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-GGUF/blob/main/aifeifei798-DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF | featherless-ai-quants | 2024-11-10T19:43:30Z | 10 | 0 | null | [
"gguf",
"text-generation",
"base_model:MaziyarPanahi/calme-2.3-legalkit-8b",
"base_model:quantized:MaziyarPanahi/calme-2.3-legalkit-8b",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T11:51:28Z | ---
base_model: MaziyarPanahi/calme-2.3-legalkit-8b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# MaziyarPanahi/calme-2.3-legalkit-8b GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [MaziyarPanahi-calme-2.3-legalkit-8b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [MaziyarPanahi-calme-2.3-legalkit-8b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [MaziyarPanahi-calme-2.3-legalkit-8b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [MaziyarPanahi-calme-2.3-legalkit-8b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [MaziyarPanahi-calme-2.3-legalkit-8b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [MaziyarPanahi-calme-2.3-legalkit-8b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [MaziyarPanahi-calme-2.3-legalkit-8b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [MaziyarPanahi-calme-2.3-legalkit-8b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [MaziyarPanahi-calme-2.3-legalkit-8b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [MaziyarPanahi-calme-2.3-legalkit-8b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [MaziyarPanahi-calme-2.3-legalkit-8b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/MaziyarPanahi-calme-2.3-legalkit-8b-GGUF/blob/main/MaziyarPanahi-calme-2.3-legalkit-8b-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF | featherless-ai-quants | 2024-11-10T19:43:21Z | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:OwenArli/ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1",
"base_model:quantized:OwenArli/ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T11:10:58Z | ---
base_model: OwenArli/ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# OwenArli/ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Instruct-Dolfin-v0.1-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF | featherless-ai-quants | 2024-11-10T19:43:16Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:haqishen/h2o-Llama-3-8B-Japanese-Instruct",
"base_model:quantized:haqishen/h2o-Llama-3-8B-Japanese-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T10:55:32Z | ---
base_model: haqishen/h2o-Llama-3-8B-Japanese-Instruct
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# haqishen/h2o-Llama-3-8B-Japanese-Instruct GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/haqishen-h2o-Llama-3-8B-Japanese-Instruct-GGUF/blob/main/haqishen-h2o-Llama-3-8B-Japanese-Instruct-Q8_0.gguf) | 8145.12 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF | featherless-ai-quants | 2024-11-10T19:43:12Z | 10 | 0 | null | [
"gguf",
"text-generation",
"base_model:nazimali/Mistral-Nemo-Kurdish-Instruct",
"base_model:quantized:nazimali/Mistral-Nemo-Kurdish-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T10:52:21Z | ---
base_model: nazimali/Mistral-Nemo-Kurdish-Instruct
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nazimali/Mistral-Nemo-Kurdish-Instruct GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nazimali-Mistral-Nemo-Kurdish-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [nazimali-Mistral-Nemo-Kurdish-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nazimali-Mistral-Nemo-Kurdish-Instruct-GGUF/blob/main/nazimali-Mistral-Nemo-Kurdish-Instruct-Q8_0.gguf) | 12419.10 MB |
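One practical way to use this table is to pick the largest quant that fits a memory budget. A sketch under stated assumptions (sizes copied from the table; the 1.2x headroom factor for KV cache and runtime overhead is a guess, not a measurement):
```python
# Pick the highest-quality quant from the table above that fits a budget.
QUANTS = [  # (name, listed file size in MB), ordered smallest to largest
    ("Q2_K", 4569.10), ("Q3_K_S", 5277.85), ("Q3_K_M", 5801.29),
    ("Q3_K_L", 6257.54), ("IQ4_XS", 6485.04), ("Q4_K_S", 6790.35),
    ("Q4_K_M", 7130.82), ("Q5_K_S", 8124.10), ("Q5_K_M", 8323.32),
    ("Q6_K", 9590.35), ("Q8_0", 12419.10),
]

def pick_quant(budget_mb: float, headroom: float = 1.2) -> str:
    """Largest listed quant whose size, padded for runtime overhead, fits."""
    fitting = [name for name, size in QUANTS if size * headroom <= budget_mb]
    if not fitting:
        raise ValueError("no listed quant fits this budget")
    return fitting[-1]

print(pick_quant(12 * 1024))  # 12 GB budget -> Q6_K
```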
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF | featherless-ai-quants | 2024-11-10T19:43:09Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:ankhamun/xxxI-Ixxx",
"base_model:quantized:ankhamun/xxxI-Ixxx",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T10:18:01Z | ---
base_model: ankhamun/xxxI-Ixxx
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ankhamun/xxxI-Ixxx GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ankhamun-xxxI-Ixxx-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [ankhamun-xxxI-Ixxx-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [ankhamun-xxxI-Ixxx-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [ankhamun-xxxI-Ixxx-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [ankhamun-xxxI-Ixxx-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [ankhamun-xxxI-Ixxx-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [ankhamun-xxxI-Ixxx-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [ankhamun-xxxI-Ixxx-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [ankhamun-xxxI-Ixxx-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [ankhamun-xxxI-Ixxx-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [ankhamun-xxxI-Ixxx-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ankhamun-xxxI-Ixxx-GGUF/blob/main/ankhamun-xxxI-Ixxx-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF | featherless-ai-quants | 2024-11-10T19:43:05Z | 10 | 0 | null | [
"gguf",
"text-generation",
"base_model:Muhammad2003/TriMistral-7B-TIES",
"base_model:quantized:Muhammad2003/TriMistral-7B-TIES",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T10:06:41Z | ---
base_model: Muhammad2003/TriMistral-7B-TIES
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Muhammad2003/TriMistral-7B-TIES GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Muhammad2003-TriMistral-7B-TIES-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Muhammad2003-TriMistral-7B-TIES-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Muhammad2003-TriMistral-7B-TIES-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Muhammad2003-TriMistral-7B-TIES-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Muhammad2003-TriMistral-7B-TIES-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Muhammad2003-TriMistral-7B-TIES-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Muhammad2003-TriMistral-7B-TIES-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Muhammad2003-TriMistral-7B-TIES-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Muhammad2003-TriMistral-7B-TIES-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Muhammad2003-TriMistral-7B-TIES-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Muhammad2003-TriMistral-7B-TIES-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Muhammad2003-TriMistral-7B-TIES-GGUF/blob/main/Muhammad2003-TriMistral-7B-TIES-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF | featherless-ai-quants | 2024-11-10T19:42:56Z | 32 | 0 | null | [
"gguf",
"text-generation",
"base_model:FPHam/L3-8B-Everything-COT",
"base_model:quantized:FPHam/L3-8B-Everything-COT",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T09:26:52Z | ---
base_model: FPHam/L3-8B-Everything-COT
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# FPHam/L3-8B-Everything-COT GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [FPHam-L3-8B-Everything-COT-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [FPHam-L3-8B-Everything-COT-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [FPHam-L3-8B-Everything-COT-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [FPHam-L3-8B-Everything-COT-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [FPHam-L3-8B-Everything-COT-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [FPHam-L3-8B-Everything-COT-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [FPHam-L3-8B-Everything-COT-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [FPHam-L3-8B-Everything-COT-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [FPHam-L3-8B-Everything-COT-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [FPHam-L3-8B-Everything-COT-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [FPHam-L3-8B-Everything-COT-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF/blob/main/FPHam-L3-8B-Everything-COT-Q8_0.gguf) | 8145.11 MB |
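Any row in the table above can be fetched on its own, without cloning the whole repository, using the `huggingface_hub` client. A minimal sketch, assuming `huggingface_hub` is installed and the Q4_K_M file is the one you want:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file into the local Hugging Face cache
# and return its path on disk.
model_path = hf_hub_download(
    repo_id="featherless-ai-quants/FPHam-L3-8B-Everything-COT-GGUF",
    filename="FPHam-L3-8B-Everything-COT-Q4_K_M.gguf",
)
print(model_path)
```

The same call works for every other row; only `filename` changes.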
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF | featherless-ai-quants | 2024-11-10T19:42:53Z | 105 | 0 | null | [
"gguf",
"text-generation",
"base_model:FreedomIntelligence/AceGPT-v2-8B-Chat",
"base_model:quantized:FreedomIntelligence/AceGPT-v2-8B-Chat",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T08:52:49Z | ---
base_model: FreedomIntelligence/AceGPT-v2-8B-Chat
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# FreedomIntelligence/AceGPT-v2-8B-Chat GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [FreedomIntelligence-AceGPT-v2-8B-Chat-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [FreedomIntelligence-AceGPT-v2-8B-Chat-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/FreedomIntelligence-AceGPT-v2-8B-Chat-GGUF/blob/main/FreedomIntelligence-AceGPT-v2-8B-Chat-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF | featherless-ai-quants | 2024-11-10T19:42:49Z | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:flammenai/flammen24-mistral-7B",
"base_model:quantized:flammenai/flammen24-mistral-7B",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T07:57:34Z | ---
base_model: flammenai/flammen24-mistral-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# flammenai/flammen24-mistral-7B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [flammenai-flammen24-mistral-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [flammenai-flammen24-mistral-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [flammenai-flammen24-mistral-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [flammenai-flammen24-mistral-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [flammenai-flammen24-mistral-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [flammenai-flammen24-mistral-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [flammenai-flammen24-mistral-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [flammenai-flammen24-mistral-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [flammenai-flammen24-mistral-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [flammenai-flammen24-mistral-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [flammenai-flammen24-mistral-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/flammenai-flammen24-mistral-7B-GGUF/blob/main/flammenai-flammen24-mistral-7B-Q8_0.gguf) | 7339.34 MB |
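Once a file is downloaded, one way to run it locally is `llama-cpp-python`. A minimal sketch, assuming the package is installed and the Q5_K_M file from the table above already sits in the working directory:

```python
from llama_cpp import Llama

# Load a local GGUF quant; n_gpu_layers=-1 offloads every layer
# to the GPU when one is available, otherwise inference runs on CPU.
llm = Llama(
    model_path="flammenai-flammen24-mistral-7B-Q5_K_M.gguf",
    n_ctx=4096,       # context window; smaller values need less RAM
    n_gpu_layers=-1,
)

out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```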
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF | featherless-ai-quants | 2024-11-10T19:42:45Z | 10 | 0 | null | [
"gguf",
"text-generation",
"base_model:mlabonne/UltraMerge-7B",
"base_model:quantized:mlabonne/UltraMerge-7B",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T07:31:33Z | ---
base_model: mlabonne/UltraMerge-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# mlabonne/UltraMerge-7B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [mlabonne-UltraMerge-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [mlabonne-UltraMerge-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [mlabonne-UltraMerge-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [mlabonne-UltraMerge-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [mlabonne-UltraMerge-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [mlabonne-UltraMerge-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [mlabonne-UltraMerge-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [mlabonne-UltraMerge-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [mlabonne-UltraMerge-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [mlabonne-UltraMerge-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [mlabonne-UltraMerge-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/mlabonne-UltraMerge-7B-GGUF/blob/main/mlabonne-UltraMerge-7B-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF | featherless-ai-quants | 2024-11-10T19:42:31Z | 39 | 0 | null | [
"gguf",
"text-generation",
"base_model:allknowingroger/StarlingMaxLimmy2-7B-slerp",
"base_model:quantized:allknowingroger/StarlingMaxLimmy2-7B-slerp",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T05:41:01Z | ---
base_model: allknowingroger/StarlingMaxLimmy2-7B-slerp
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# allknowingroger/StarlingMaxLimmy2-7B-slerp GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [allknowingroger-StarlingMaxLimmy2-7B-slerp-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [allknowingroger-StarlingMaxLimmy2-7B-slerp-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/allknowingroger-StarlingMaxLimmy2-7B-slerp-GGUF/blob/main/allknowingroger-StarlingMaxLimmy2-7B-slerp-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF | featherless-ai-quants | 2024-11-10T19:42:30Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:Locutusque/OpenCerebrum-2.0-7B",
"base_model:quantized:Locutusque/OpenCerebrum-2.0-7B",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T05:33:25Z | ---
base_model: Locutusque/OpenCerebrum-2.0-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Locutusque/OpenCerebrum-2.0-7B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Locutusque-OpenCerebrum-2.0-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Locutusque-OpenCerebrum-2.0-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Locutusque-OpenCerebrum-2.0-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Locutusque-OpenCerebrum-2.0-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Locutusque-OpenCerebrum-2.0-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Locutusque-OpenCerebrum-2.0-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Locutusque-OpenCerebrum-2.0-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Locutusque-OpenCerebrum-2.0-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Locutusque-OpenCerebrum-2.0-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Locutusque-OpenCerebrum-2.0-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Locutusque-OpenCerebrum-2.0-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-OpenCerebrum-2.0-7B-GGUF/blob/main/Locutusque-OpenCerebrum-2.0-7B-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF | featherless-ai-quants | 2024-11-10T19:42:24Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:uygarkurt/llama-3-merged-linear",
"base_model:quantized:uygarkurt/llama-3-merged-linear",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-06T04:49:12Z | ---
base_model: uygarkurt/llama-3-merged-linear
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# uygarkurt/llama-3-merged-linear GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [uygarkurt-llama-3-merged-linear-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [uygarkurt-llama-3-merged-linear-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [uygarkurt-llama-3-merged-linear-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [uygarkurt-llama-3-merged-linear-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [uygarkurt-llama-3-merged-linear-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [uygarkurt-llama-3-merged-linear-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [uygarkurt-llama-3-merged-linear-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [uygarkurt-llama-3-merged-linear-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [uygarkurt-llama-3-merged-linear-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [uygarkurt-llama-3-merged-linear-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [uygarkurt-llama-3-merged-linear-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF/blob/main/uygarkurt-llama-3-merged-linear-Q8_0.gguf) | 8145.11 MB |
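Rather than hard-coding filenames, the quants available in a repository can be discovered programmatically. A short sketch, assuming `huggingface_hub` is installed:

```python
from huggingface_hub import list_repo_files

# List every GGUF file published in this repository.
repo = "featherless-ai-quants/uygarkurt-llama-3-merged-linear-GGUF"
for name in sorted(list_repo_files(repo)):
    if name.endswith(".gguf"):
        print(name)
```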
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF | featherless-ai-quants | 2024-11-10T19:42:12Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:shleeeee/mistral-ko-7b-wiki-neft",
"base_model:quantized:shleeeee/mistral-ko-7b-wiki-neft",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-06T03:34:59Z | ---
base_model: shleeeee/mistral-ko-7b-wiki-neft
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# shleeeee/mistral-ko-7b-wiki-neft GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [shleeeee-mistral-ko-7b-wiki-neft-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [shleeeee-mistral-ko-7b-wiki-neft-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [shleeeee-mistral-ko-7b-wiki-neft-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [shleeeee-mistral-ko-7b-wiki-neft-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [shleeeee-mistral-ko-7b-wiki-neft-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [shleeeee-mistral-ko-7b-wiki-neft-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [shleeeee-mistral-ko-7b-wiki-neft-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [shleeeee-mistral-ko-7b-wiki-neft-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [shleeeee-mistral-ko-7b-wiki-neft-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [shleeeee-mistral-ko-7b-wiki-neft-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [shleeeee-mistral-ko-7b-wiki-neft-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-7b-wiki-neft-GGUF/blob/main/shleeeee-mistral-ko-7b-wiki-neft-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF | featherless-ai-quants | 2024-11-10T19:41:56Z | 84 | 0 | null | [
"gguf",
"text-generation",
"base_model:KoboldAI/Mistral-7B-Erebus-v3",
"base_model:quantized:KoboldAI/Mistral-7B-Erebus-v3",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-05T22:42:59Z | ---
base_model: KoboldAI/Mistral-7B-Erebus-v3
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# KoboldAI/Mistral-7B-Erebus-v3 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [KoboldAI-Mistral-7B-Erebus-v3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [KoboldAI-Mistral-7B-Erebus-v3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [KoboldAI-Mistral-7B-Erebus-v3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [KoboldAI-Mistral-7B-Erebus-v3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [KoboldAI-Mistral-7B-Erebus-v3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [KoboldAI-Mistral-7B-Erebus-v3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [KoboldAI-Mistral-7B-Erebus-v3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [KoboldAI-Mistral-7B-Erebus-v3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [KoboldAI-Mistral-7B-Erebus-v3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [KoboldAI-Mistral-7B-Erebus-v3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [KoboldAI-Mistral-7B-Erebus-v3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/KoboldAI-Mistral-7B-Erebus-v3-GGUF/blob/main/KoboldAI-Mistral-7B-Erebus-v3-Q8_0.gguf) | 7339.34 MB |
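For long generations it is often nicer to stream tokens as they are produced instead of waiting for the full completion. A minimal sketch with `llama-cpp-python`, assuming the Q4_K_S file from the table above has been downloaded:

```python
from llama_cpp import Llama

llm = Llama(model_path="KoboldAI-Mistral-7B-Erebus-v3-Q4_K_S.gguf", n_ctx=4096)

# stream=True turns the call into a generator of partial completions.
for chunk in llm("Once upon a time", max_tokens=80, stream=True):
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```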
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF | featherless-ai-quants | 2024-11-10T19:41:46Z | 41 | 0 | null | [
"gguf",
"text-generation",
"base_model:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B",
"base_model:quantized:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T22:25:46Z | ---
base_model: grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF | featherless-ai-quants | 2024-11-10T19:41:40Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:Liangmingxin/ThetaWave-7B-sft",
"base_model:quantized:Liangmingxin/ThetaWave-7B-sft",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T21:36:47Z | ---
base_model: Liangmingxin/ThetaWave-7B-sft
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Liangmingxin/ThetaWave-7B-sft GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Liangmingxin-ThetaWave-7B-sft-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Liangmingxin-ThetaWave-7B-sft-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Liangmingxin-ThetaWave-7B-sft-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Liangmingxin-ThetaWave-7B-sft-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Liangmingxin-ThetaWave-7B-sft-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Liangmingxin-ThetaWave-7B-sft-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Liangmingxin-ThetaWave-7B-sft-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Liangmingxin-ThetaWave-7B-sft-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Liangmingxin-ThetaWave-7B-sft-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Liangmingxin-ThetaWave-7B-sft-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Liangmingxin-ThetaWave-7B-sft-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Liangmingxin-ThetaWave-7B-sft-GGUF/blob/main/Liangmingxin-ThetaWave-7B-sft-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF | featherless-ai-quants | 2024-11-10T19:41:37Z | 68 | 0 | null | [
"gguf",
"text-generation",
"base_model:nbeerbower/mistral-nemo-gutades-12B",
"base_model:quantized:nbeerbower/mistral-nemo-gutades-12B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T21:19:05Z | ---
base_model: nbeerbower/mistral-nemo-gutades-12B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/mistral-nemo-gutades-12B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nbeerbower-mistral-nemo-gutades-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [nbeerbower-mistral-nemo-gutades-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [nbeerbower-mistral-nemo-gutades-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [nbeerbower-mistral-nemo-gutades-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nbeerbower-mistral-nemo-gutades-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [nbeerbower-mistral-nemo-gutades-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [nbeerbower-mistral-nemo-gutades-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [nbeerbower-mistral-nemo-gutades-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [nbeerbower-mistral-nemo-gutades-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [nbeerbower-mistral-nemo-gutades-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [nbeerbower-mistral-nemo-gutades-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-mistral-nemo-gutades-12B-GGUF/blob/main/nbeerbower-mistral-nemo-gutades-12B-Q8_0.gguf) | 12419.10 MB |
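The size column is a rough proxy for the memory the weights need at load time (KV-cache and runtime overhead come on top), so picking a quant is mostly a budget question. A small sketch using the 12B sizes from the table above:

```python
# Sizes in MB, copied from the table above.
QUANTS = {
    "Q2_K": 4569.10, "Q3_K_S": 5277.85, "Q3_K_M": 5801.29, "Q3_K_L": 6257.54,
    "IQ4_XS": 6485.04, "Q4_K_S": 6790.35, "Q4_K_M": 7130.82,
    "Q5_K_S": 8124.10, "Q5_K_M": 8323.32, "Q6_K": 9590.35, "Q8_0": 12419.10,
}

def best_quant(budget_mb: float) -> str | None:
    """Return the largest quant that fits the budget, or None."""
    fitting = [(size, name) for name, size in QUANTS.items() if size <= budget_mb]
    return max(fitting)[1] if fitting else None

print(best_quant(8000))   # -> Q4_K_M
print(best_quant(12000))  # -> Q6_K
```

Higher-bit quants track the original weights more closely; lower-bit ones trade quality for memory and speed.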
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF | featherless-ai-quants | 2024-11-10T19:41:07Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:Locutusque/Hyperion-1.5-Mistral-7B",
"base_model:quantized:Locutusque/Hyperion-1.5-Mistral-7B",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-05T16:13:32Z | ---
base_model: Locutusque/Hyperion-1.5-Mistral-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Locutusque/Hyperion-1.5-Mistral-7B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Locutusque-Hyperion-1.5-Mistral-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Locutusque-Hyperion-1.5-Mistral-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Locutusque-Hyperion-1.5-Mistral-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Locutusque-Hyperion-1.5-Mistral-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Locutusque-Hyperion-1.5-Mistral-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Locutusque-Hyperion-1.5-Mistral-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Locutusque-Hyperion-1.5-Mistral-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Locutusque-Hyperion-1.5-Mistral-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Locutusque-Hyperion-1.5-Mistral-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Locutusque-Hyperion-1.5-Mistral-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Locutusque-Hyperion-1.5-Mistral-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-1.5-Mistral-7B-GGUF/blob/main/Locutusque-Hyperion-1.5-Mistral-7B-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF | featherless-ai-quants | 2024-11-10T19:40:59Z | 94 | 0 | null | [
"gguf",
"text-generation",
"base_model:Undi95/Meta-Llama-3.1-8B-Claude-bf16",
"base_model:quantized:Undi95/Meta-Llama-3.1-8B-Claude-bf16",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T15:23:29Z | ---
base_model: Undi95/Meta-Llama-3.1-8B-Claude-bf16
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Undi95/Meta-Llama-3.1-8B-Claude-bf16 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-IQ4_XS.gguf) | 4276.63 MB |
| Q2_K | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q5_K_M.gguf) | 5467.41 MB |
| Q5_K_S | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q5_K_S.gguf) | 5339.91 MB |
| Q6_K | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q6_K.gguf) | 6290.45 MB |
| Q8_0 | [Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Undi95-Meta-Llama-3.1-8B-Claude-bf16-GGUF/blob/main/Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q8_0.gguf) | 8145.12 MB |
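This model is tagged conversational, so chat-style inference makes sense. A minimal sketch with `llama-cpp-python`, assuming the Q4_K_M file is local; `create_chat_completion` applies the chat template stored in the GGUF metadata:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Undi95-Meta-Llama-3.1-8B-Claude-bf16-Q4_K_M.gguf",
    n_ctx=8192,
)

resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain GGUF in one sentence."},
    ],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```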
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/picAIso-TARS-8B-GGUF | featherless-ai-quants | 2024-11-10T19:40:53Z | 9 | 0 | null | [
"gguf",
"text-generation",
"base_model:picAIso/TARS-8B",
"base_model:quantized:picAIso/TARS-8B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T15:21:33Z | ---
base_model: picAIso/TARS-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# picAIso/TARS-8B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [picAIso-TARS-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [picAIso-TARS-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [picAIso-TARS-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [picAIso-TARS-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [picAIso-TARS-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [picAIso-TARS-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [picAIso-TARS-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [picAIso-TARS-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [picAIso-TARS-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [picAIso-TARS-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [picAIso-TARS-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-GGUF/blob/main/picAIso-TARS-8B-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF | featherless-ai-quants | 2024-11-10T19:40:49Z | 58 | 0 | null | [
"gguf",
"text-generation",
"base_model:maldv/badger-writer-llama-3-8b",
"base_model:quantized:maldv/badger-writer-llama-3-8b",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T14:36:24Z | ---
base_model: maldv/badger-writer-llama-3-8b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# maldv/badger-writer-llama-3-8b GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [maldv-badger-writer-llama-3-8b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [maldv-badger-writer-llama-3-8b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [maldv-badger-writer-llama-3-8b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [maldv-badger-writer-llama-3-8b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [maldv-badger-writer-llama-3-8b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [maldv-badger-writer-llama-3-8b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [maldv-badger-writer-llama-3-8b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [maldv-badger-writer-llama-3-8b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [maldv-badger-writer-llama-3-8b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [maldv-badger-writer-llama-3-8b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [maldv-badger-writer-llama-3-8b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-writer-llama-3-8b-GGUF/blob/main/maldv-badger-writer-llama-3-8b-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF | featherless-ai-quants | 2024-11-10T19:40:46Z | 13 | 0 | null | [
"gguf",
"text-generation",
"base_model:realshyfox/sharded-Llama-3-8B",
"base_model:quantized:realshyfox/sharded-Llama-3-8B",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-05T14:32:26Z | ---
base_model: realshyfox/sharded-Llama-3-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# realshyfox/sharded-Llama-3-8B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [realshyfox-sharded-Llama-3-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [realshyfox-sharded-Llama-3-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [realshyfox-sharded-Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [realshyfox-sharded-Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [realshyfox-sharded-Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [realshyfox-sharded-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [realshyfox-sharded-Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [realshyfox-sharded-Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [realshyfox-sharded-Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [realshyfox-sharded-Llama-3-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [realshyfox-sharded-Llama-3-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/realshyfox-sharded-Llama-3-8B-GGUF/blob/main/realshyfox-sharded-Llama-3-8B-Q8_0.gguf) | 8145.11 MB |
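Once downloaded, a file like the Q4_K_M quant above can be loaded with any GGUF-compatible runtime. A minimal sketch with the `llama-cpp-python` bindings (an assumption; any llama.cpp-based runtime works, and the context size shown is just an example):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load the Q4_K_M quant and generate a short completion.
llm = Llama(
    model_path="realshyfox-sharded-Llama-3-8B-Q4_K_M.gguf",
    n_ctx=4096,  # example context window, not a repo-mandated value
)
out = llm("Write a haiku about autumn.", max_tokens=64)
print(out["choices"][0]["text"])
```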
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF | featherless-ai-quants | 2024-11-10T19:40:44Z | 22 | 0 | null | ["gguf", "text-generation", "base_model:4yo1/llama3-eng-ko-8b-sl3", "base_model:quantized:4yo1/llama3-eng-ko-8b-sl3", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T14:21:39Z |
---
base_model: 4yo1/llama3-eng-ko-8b-sl3
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# 4yo1/llama3-eng-ko-8b-sl3 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [4yo1-llama3-eng-ko-8b-sl3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [4yo1-llama3-eng-ko-8b-sl3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [4yo1-llama3-eng-ko-8b-sl3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [4yo1-llama3-eng-ko-8b-sl3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [4yo1-llama3-eng-ko-8b-sl3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [4yo1-llama3-eng-ko-8b-sl3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [4yo1-llama3-eng-ko-8b-sl3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [4yo1-llama3-eng-ko-8b-sl3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [4yo1-llama3-eng-ko-8b-sl3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [4yo1-llama3-eng-ko-8b-sl3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [4yo1-llama3-eng-ko-8b-sl3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/4yo1-llama3-eng-ko-8b-sl3-GGUF/blob/main/4yo1-llama3-eng-ko-8b-sl3-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF | featherless-ai-quants | 2024-11-10T19:40:34Z | 23 | 0 | null | ["gguf", "text-generation", "base_model:grimjim/llama-3-Nephilim-v2.1-8B", "base_model:quantized:grimjim/llama-3-Nephilim-v2.1-8B", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T13:43:39Z |
---
base_model: grimjim/llama-3-Nephilim-v2.1-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# grimjim/llama-3-Nephilim-v2.1-8B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [grimjim-llama-3-Nephilim-v2.1-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [grimjim-llama-3-Nephilim-v2.1-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [grimjim-llama-3-Nephilim-v2.1-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [grimjim-llama-3-Nephilim-v2.1-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [grimjim-llama-3-Nephilim-v2.1-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [grimjim-llama-3-Nephilim-v2.1-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [grimjim-llama-3-Nephilim-v2.1-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [grimjim-llama-3-Nephilim-v2.1-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [grimjim-llama-3-Nephilim-v2.1-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [grimjim-llama-3-Nephilim-v2.1-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [grimjim-llama-3-Nephilim-v2.1-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-Nephilim-v2.1-8B-GGUF/blob/main/grimjim-llama-3-Nephilim-v2.1-8B-Q8_0.gguf) | 8145.12 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF | featherless-ai-quants | 2024-11-10T19:40:02Z | 73 | 0 | null | ["gguf", "text-generation", "base_model:Darkknight535/OpenCrystal-15B-L3-v3", "base_model:quantized:Darkknight535/OpenCrystal-15B-L3-v3", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T11:44:58Z |
---
base_model: Darkknight535/OpenCrystal-15B-L3-v3
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Darkknight535/OpenCrystal-15B-L3-v3 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Darkknight535-OpenCrystal-15B-L3-v3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-IQ4_XS.gguf) | 7868.64 MB |
| Q2_K | [Darkknight535-OpenCrystal-15B-L3-v3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q2_K.gguf) | 5480.87 MB |
| Q3_K_L | [Darkknight535-OpenCrystal-15B-L3-v3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q3_K_L.gguf) | 7609.76 MB |
| Q3_K_M | [Darkknight535-OpenCrystal-15B-L3-v3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q3_K_M.gguf) | 7030.76 MB |
| Q3_K_S | [Darkknight535-OpenCrystal-15B-L3-v3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q3_K_S.gguf) | 6355.76 MB |
| Q4_K_M | [Darkknight535-OpenCrystal-15B-L3-v3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q4_K_M.gguf) | 8685.29 MB |
| Q4_K_S | [Darkknight535-OpenCrystal-15B-L3-v3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q4_K_S.gguf) | 8248.29 MB |
| Q5_K_M | [Darkknight535-OpenCrystal-15B-L3-v3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q5_K_M.gguf) | 10171.92 MB |
| Q5_K_S | [Darkknight535-OpenCrystal-15B-L3-v3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q5_K_S.gguf) | 9916.92 MB |
| Q6_K | [Darkknight535-OpenCrystal-15B-L3-v3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q6_K.gguf) | 11751.46 MB |
| Q8_0 | [Darkknight535-OpenCrystal-15B-L3-v3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Darkknight535-OpenCrystal-15B-L3-v3-GGUF/blob/main/Darkknight535-OpenCrystal-15B-L3-v3-Q8_0.gguf) | 15218.13 MB |
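For a 15B model the file sizes above span roughly 5.5 GB to 15 GB, so the practical question is which quant fits your hardware. As a rough, hedged screen (the 15% headroom for KV cache and scratch buffers is an assumption, not a measured value), a sketch like this can compare the table against available VRAM:

```python
# Sizes in MB, copied from the table above.
QUANTS_MB = {
    "Q3_K_M": 7030.76,
    "Q4_K_M": 8685.29,
    "Q5_K_M": 10171.92,
    "Q8_0": 15218.13,
}

def fits(vram_gb: float, size_mb: float, headroom: float = 0.15) -> bool:
    """Crude check: file size plus headroom must fit in VRAM."""
    return size_mb * (1 + headroom) <= vram_gb * 1024

for name, mb in QUANTS_MB.items():
    verdict = "fits" if fits(12, mb) else "does not fit"
    print(f"{name}: {verdict} in 12 GB")
```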
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF | featherless-ai-quants | 2024-11-10T19:40:00Z | 12 | 0 | null | ["gguf", "text-generation", "base_model:wang7776/Mistral-7B-Instruct-v0.2-attention-sparsity-20", "base_model:quantized:wang7776/Mistral-7B-Instruct-v0.2-attention-sparsity-20", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T11:20:50Z |
---
base_model: wang7776/Mistral-7B-Instruct-v0.2-attention-sparsity-20
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# wang7776/Mistral-7B-Instruct-v0.2-attention-sparsity-20 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-GGUF/blob/main/wang7776-Mistral-7B-Instruct-v0.2-attention-sparsity-20-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF | featherless-ai-quants | 2024-11-10T19:39:55Z | 25 | 0 | null | ["gguf", "text-generation", "base_model:nbeerbower/Lyra-Gutenberg-mistral-nemo-12B", "base_model:quantized:nbeerbower/Lyra-Gutenberg-mistral-nemo-12B", "endpoints_compatible", "region:us"] | text-generation | 2024-11-05T10:47:19Z |
---
base_model: nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/Lyra-Gutenberg-mistral-nemo-12B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF/blob/main/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-Q8_0.gguf) | 12419.10 MB |
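To enumerate the files in a quant repo without clicking through the table (for example, to script downloads of several quants at once), a small sketch with `huggingface_hub` (same assumption as earlier that the package is installed):

```python
from huggingface_hub import list_repo_files

# List every GGUF file shipped in this quant repo.
repo = "featherless-ai-quants/nbeerbower-Lyra-Gutenberg-mistral-nemo-12B-GGUF"
for f in list_repo_files(repo):
    if f.endswith(".gguf"):
        print(f)
```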
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF | featherless-ai-quants | 2024-11-10T19:39:52Z | 6 | 0 | null | ["gguf", "text-generation", "base_model:CerebrumTech/cere-llama-3-8b-tr", "base_model:quantized:CerebrumTech/cere-llama-3-8b-tr", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T10:41:31Z |
---
base_model: CerebrumTech/cere-llama-3-8b-tr
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# CerebrumTech/cere-llama-3-8b-tr GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [CerebrumTech-cere-llama-3-8b-tr-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [CerebrumTech-cere-llama-3-8b-tr-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [CerebrumTech-cere-llama-3-8b-tr-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [CerebrumTech-cere-llama-3-8b-tr-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [CerebrumTech-cere-llama-3-8b-tr-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [CerebrumTech-cere-llama-3-8b-tr-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [CerebrumTech-cere-llama-3-8b-tr-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [CerebrumTech-cere-llama-3-8b-tr-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [CerebrumTech-cere-llama-3-8b-tr-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [CerebrumTech-cere-llama-3-8b-tr-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [CerebrumTech-cere-llama-3-8b-tr-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/CerebrumTech-cere-llama-3-8b-tr-GGUF/blob/main/CerebrumTech-cere-llama-3-8b-tr-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF | featherless-ai-quants | 2024-11-10T19:39:48Z | 19 | 0 | null | ["gguf", "text-generation", "base_model:KOCDIGITAL/Kocdigital-LLM-8b-v0.1", "base_model:quantized:KOCDIGITAL/Kocdigital-LLM-8b-v0.1", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T10:35:23Z |
---
base_model: KOCDIGITAL/Kocdigital-LLM-8b-v0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# KOCDIGITAL/Kocdigital-LLM-8b-v0.1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-GGUF/blob/main/KOCDIGITAL-Kocdigital-LLM-8b-v0.1-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF | featherless-ai-quants | 2024-11-10T19:39:41Z | 31 | 0 | null | ["gguf", "text-generation", "base_model:maldv/badger-lambda-llama-3-8b", "base_model:quantized:maldv/badger-lambda-llama-3-8b", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T10:03:02Z |
---
base_model: maldv/badger-lambda-llama-3-8b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# maldv/badger-lambda-llama-3-8b GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [maldv-badger-lambda-llama-3-8b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [maldv-badger-lambda-llama-3-8b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [maldv-badger-lambda-llama-3-8b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [maldv-badger-lambda-llama-3-8b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [maldv-badger-lambda-llama-3-8b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [maldv-badger-lambda-llama-3-8b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [maldv-badger-lambda-llama-3-8b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [maldv-badger-lambda-llama-3-8b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [maldv-badger-lambda-llama-3-8b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [maldv-badger-lambda-llama-3-8b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [maldv-badger-lambda-llama-3-8b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/maldv-badger-lambda-llama-3-8b-GGUF/blob/main/maldv-badger-lambda-llama-3-8b-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF | featherless-ai-quants | 2024-11-10T19:39:34Z | 5 | 0 | null | ["gguf", "text-generation", "base_model:bunnycore/LLama-3.1-8B-Matrix", "base_model:quantized:bunnycore/LLama-3.1-8B-Matrix", "endpoints_compatible", "region:us"] | text-generation | 2024-11-05T09:29:29Z |
---
base_model: bunnycore/LLama-3.1-8B-Matrix
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# bunnycore/LLama-3.1-8B-Matrix GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [bunnycore-LLama-3.1-8B-Matrix-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [bunnycore-LLama-3.1-8B-Matrix-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [bunnycore-LLama-3.1-8B-Matrix-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [bunnycore-LLama-3.1-8B-Matrix-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [bunnycore-LLama-3.1-8B-Matrix-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [bunnycore-LLama-3.1-8B-Matrix-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [bunnycore-LLama-3.1-8B-Matrix-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [bunnycore-LLama-3.1-8B-Matrix-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [bunnycore-LLama-3.1-8B-Matrix-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [bunnycore-LLama-3.1-8B-Matrix-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [bunnycore-LLama-3.1-8B-Matrix-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-LLama-3.1-8B-Matrix-GGUF/blob/main/bunnycore-LLama-3.1-8B-Matrix-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF | featherless-ai-quants | 2024-11-10T19:39:17Z | 57 | 0 | null | ["gguf", "text-generation", "base_model:TheDrummer/Rocinante-12B-v1.1", "base_model:quantized:TheDrummer/Rocinante-12B-v1.1", "endpoints_compatible", "region:us"] | text-generation | 2024-11-05T08:29:22Z |
---
base_model: TheDrummer/Rocinante-12B-v1.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# TheDrummer/Rocinante-12B-v1.1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [TheDrummer-Rocinante-12B-v1.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [TheDrummer-Rocinante-12B-v1.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [TheDrummer-Rocinante-12B-v1.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [TheDrummer-Rocinante-12B-v1.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [TheDrummer-Rocinante-12B-v1.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [TheDrummer-Rocinante-12B-v1.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [TheDrummer-Rocinante-12B-v1.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [TheDrummer-Rocinante-12B-v1.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [TheDrummer-Rocinante-12B-v1.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [TheDrummer-Rocinante-12B-v1.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [TheDrummer-Rocinante-12B-v1.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/TheDrummer-Rocinante-12B-v1.1-GGUF/blob/main/TheDrummer-Rocinante-12B-v1.1-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF | featherless-ai-quants | 2024-11-10T19:39:13Z | 14 | 0 | null | ["gguf", "text-generation", "base_model:kaitchup/Mayonnaise-4in1-01", "base_model:quantized:kaitchup/Mayonnaise-4in1-01", "endpoints_compatible", "region:us"] | text-generation | 2024-11-05T08:23:39Z |
---
base_model: kaitchup/Mayonnaise-4in1-01
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# kaitchup/Mayonnaise-4in1-01 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [kaitchup-Mayonnaise-4in1-01-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [kaitchup-Mayonnaise-4in1-01-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [kaitchup-Mayonnaise-4in1-01-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [kaitchup-Mayonnaise-4in1-01-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [kaitchup-Mayonnaise-4in1-01-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [kaitchup-Mayonnaise-4in1-01-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [kaitchup-Mayonnaise-4in1-01-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [kaitchup-Mayonnaise-4in1-01-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [kaitchup-Mayonnaise-4in1-01-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [kaitchup-Mayonnaise-4in1-01-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q6_K.gguf) | 5666.79 MB |
| Q8_0 | [kaitchup-Mayonnaise-4in1-01-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/kaitchup-Mayonnaise-4in1-01-GGUF/blob/main/kaitchup-Mayonnaise-4in1-01-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/cookinai-Blitz-v0.1-GGUF | featherless-ai-quants | 2024-11-10T19:39:12Z | 6 | 0 | null | ["gguf", "text-generation", "base_model:cookinai/Blitz-v0.1", "base_model:quantized:cookinai/Blitz-v0.1", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T08:17:41Z |
---
base_model: cookinai/Blitz-v0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# cookinai/Blitz-v0.1 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [cookinai-Blitz-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [cookinai-Blitz-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [cookinai-Blitz-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [cookinai-Blitz-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [cookinai-Blitz-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [cookinai-Blitz-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [cookinai-Blitz-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [cookinai-Blitz-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [cookinai-Blitz-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [cookinai-Blitz-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [cookinai-Blitz-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/cookinai-Blitz-v0.1-GGUF/blob/main/cookinai-Blitz-v0.1-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF | featherless-ai-quants | 2024-11-10T19:39:11Z | 17 | 0 | null | ["gguf", "text-generation", "base_model:InferenceIllusionist/Excalibur-7b-DPO", "base_model:quantized:InferenceIllusionist/Excalibur-7b-DPO", "endpoints_compatible", "region:us"] | text-generation | 2024-11-05T08:13:56Z |
---
base_model: InferenceIllusionist/Excalibur-7b-DPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# InferenceIllusionist/Excalibur-7b-DPO GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [InferenceIllusionist-Excalibur-7b-DPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [InferenceIllusionist-Excalibur-7b-DPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [InferenceIllusionist-Excalibur-7b-DPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [InferenceIllusionist-Excalibur-7b-DPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [InferenceIllusionist-Excalibur-7b-DPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [InferenceIllusionist-Excalibur-7b-DPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [InferenceIllusionist-Excalibur-7b-DPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [InferenceIllusionist-Excalibur-7b-DPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [InferenceIllusionist-Excalibur-7b-DPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [InferenceIllusionist-Excalibur-7b-DPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [InferenceIllusionist-Excalibur-7b-DPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/InferenceIllusionist-Excalibur-7b-DPO-GGUF/blob/main/InferenceIllusionist-Excalibur-7b-DPO-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF | featherless-ai-quants | 2024-11-10T19:39:04Z | 14 | 0 | null | ["gguf", "text-generation", "base_model:IntervitensInc/Mistral-Nemo-Base-2407-chatml", "base_model:quantized:IntervitensInc/Mistral-Nemo-Base-2407-chatml", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T07:30:26Z |
---
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# IntervitensInc/Mistral-Nemo-Base-2407-chatml GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/IntervitensInc-Mistral-Nemo-Base-2407-chatml-GGUF/blob/main/IntervitensInc-Mistral-Nemo-Base-2407-chatml-Q8_0.gguf) | 12419.10 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF | featherless-ai-quants | 2024-11-10T19:39:03Z | 24 | 0 | null | ["gguf", "text-generation", "base_model:QuantumIntelligence/QI-neural-chat-7B-ko-DPO", "base_model:quantized:QuantumIntelligence/QI-neural-chat-7B-ko-DPO", "endpoints_compatible", "region:us"] | text-generation | 2024-11-05T06:43:49Z |
---
base_model: QuantumIntelligence/QI-neural-chat-7B-ko-DPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# QuantumIntelligence/QI-neural-chat-7B-ko-DPO GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-GGUF/blob/main/QuantumIntelligence-QI-neural-chat-7B-ko-DPO-Q8_0.gguf) | 7339.34 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF | featherless-ai-quants | 2024-11-10T19:38:58Z | 17 | 0 | null | ["gguf", "text-generation", "base_model:PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct", "base_model:quantized:PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-11-05T06:41:30Z |
---
base_model: PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct GGUF Quantizations

*Optimized GGUF quantization files for efficient local inference*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-GGUF/blob/main/PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q8_0.gguf) | 8145.11 MB |
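
Once a file is downloaded, a minimal local-inference sketch with `llama-cpp-python` (one possible runtime, assumed installed via `pip install llama-cpp-python`; any GGUF-compatible runtime works) might look like:

```python
# Load a downloaded quant from the table above and run a single completion.
from llama_cpp import Llama

llm = Llama(
    model_path="PathFinderKR-Waktaverse-Llama-3-KO-8B-Instruct-Q4_K_M.gguf",
    n_ctx=4096,  # context window; raise it if you have spare RAM
)
out = llm("The capital of South Korea is", max_tokens=32)
print(out["choices"][0]["text"])
```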
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama-architecture model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF | featherless-ai-quants | 2024-11-10T19:38:55Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full",
"base_model:quantized:lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T06:35:22Z | ---
base_model: lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full GGUF Quantizations

*Optimized GGUF quantization files for efficient local inference*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-GGUF/blob/main/lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q8_0.gguf) | 8145.11 MB |
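
Since this repo is tagged `conversational`, a chat-style call is the natural interface. Here is a sketch using `llama-cpp-python`'s OpenAI-like API (the file name is taken from the table and assumed to be already downloaded to the working directory):

```python
# Chat-completion sketch against a local GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="lightblue-suzume-llama-3-8B-multilingual-orpo-borda-full-Q5_K_M.gguf",
    n_ctx=4096,
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in French."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```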
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama-architecture model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF | featherless-ai-quants | 2024-11-10T19:38:50Z | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:ichigoberry/MonarchPipe-7B-slerp",
"base_model:quantized:ichigoberry/MonarchPipe-7B-slerp",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-05T06:02:18Z | ---
base_model: ichigoberry/MonarchPipe-7B-slerp
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ichigoberry/MonarchPipe-7B-slerp GGUF Quantizations

*Optimized GGUF quantization files for efficient local inference*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ichigoberry-MonarchPipe-7B-slerp-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [ichigoberry-MonarchPipe-7B-slerp-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [ichigoberry-MonarchPipe-7B-slerp-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [ichigoberry-MonarchPipe-7B-slerp-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [ichigoberry-MonarchPipe-7B-slerp-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [ichigoberry-MonarchPipe-7B-slerp-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [ichigoberry-MonarchPipe-7B-slerp-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [ichigoberry-MonarchPipe-7B-slerp-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [ichigoberry-MonarchPipe-7B-slerp-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [ichigoberry-MonarchPipe-7B-slerp-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [ichigoberry-MonarchPipe-7B-slerp-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ichigoberry-MonarchPipe-7B-slerp-GGUF/blob/main/ichigoberry-MonarchPipe-7B-slerp-Q8_0.gguf) | 7339.34 MB |
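
The table doubles as a sizing guide. As a rough illustration (the 1.2x headroom factor is an assumption to leave room for the KV cache and runtime overhead, not an official rule), a tiny helper can pick the largest quant that fits a memory budget:

```python
# Sizes in MB, copied from the table above.
SIZES_MB = {
    "IQ4_XS": 3761.66, "Q2_K": 2593.27, "Q3_K_L": 3644.97, "Q3_K_M": 3355.97,
    "Q3_K_S": 3017.97, "Q4_K_M": 4166.07, "Q4_K_S": 3948.57, "Q5_K_M": 4893.69,
    "Q5_K_S": 4766.19, "Q6_K": 5666.80, "Q8_0": 7339.34,
}

def pick_quant(budget_mb: float, headroom: float = 1.2) -> str:
    """Largest file whose size times headroom still fits the budget."""
    fitting = {q: s for q, s in SIZES_MB.items() if s * headroom <= budget_mb}
    return max(fitting, key=fitting.get) if fitting else "Q2_K"

print(pick_quant(6000))  # 'Q5_K_M' for a ~6 GB budget
```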
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama-architecture model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF | featherless-ai-quants | 2024-11-10T19:38:49Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:Locutusque/Hyperion-3.0-Mistral-7B-alpha",
"base_model:quantized:Locutusque/Hyperion-3.0-Mistral-7B-alpha",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-05T05:58:53Z | ---
base_model: Locutusque/Hyperion-3.0-Mistral-7B-alpha
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Locutusque/Hyperion-3.0-Mistral-7B-alpha GGUF Quantizations

*Optimized GGUF quantization files for efficient local inference*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-alpha-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-alpha-Q8_0.gguf) | 7339.34 MB |
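
The sizes above also let you sanity-check the effective precision of each quant. A back-of-the-envelope sketch, assuming the commonly cited ~7.24B parameter count for Mistral-7B models (an assumption, not read from the files):

```python
# Effective bits per weight implied by a file size.
def bits_per_weight(size_mb: float, n_params: float = 7.24e9) -> float:
    return size_mb * 1024 * 1024 * 8 / n_params

for name, mb in [("Q4_K_M", 4166.07), ("Q8_0", 7339.34)]:  # sizes from the table
    print(f"{name}: ~{bits_per_weight(mb):.2f} bits/weight")
```

This prints roughly 4.8 bits/weight for Q4_K_M and 8.5 for Q8_0, in line with what those formats are expected to use.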
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama-architecture model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF | featherless-ai-quants | 2024-11-10T19:38:47Z | 17 | 0 | null | [
"gguf",
"text-generation",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:quantized:eren23/dpo-binarized-NeutrixOmnibe-7B",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-05T05:53:13Z | ---
base_model: eren23/dpo-binarized-NeutrixOmnibe-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# eren23/dpo-binarized-NeutrixOmnibe-7B GGUF Quantizations

*Optimized GGUF quantization files for efficient local inference*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [eren23-dpo-binarized-NeutrixOmnibe-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [eren23-dpo-binarized-NeutrixOmnibe-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF/blob/main/eren23-dpo-binarized-NeutrixOmnibe-7B-Q8_0.gguf) | 7339.34 MB |
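
Rather than hard-coding file names, you can enumerate the quants in this repo at runtime; a small sketch with `huggingface_hub` (assumed installed):

```python
# List every .gguf file in the repo instead of copying names from the table.
from huggingface_hub import list_repo_files

repo = "featherless-ai-quants/eren23-dpo-binarized-NeutrixOmnibe-7B-GGUF"
ggufs = [f for f in list_repo_files(repo) if f.endswith(".gguf")]
for f in sorted(ggufs):
    print(f)
```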
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama-architecture model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Azazelle-L3-RP_io-GGUF | featherless-ai-quants | 2024-11-10T19:38:44Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:Azazelle/L3-RP_io",
"base_model:quantized:Azazelle/L3-RP_io",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T05:40:48Z | ---
base_model: Azazelle/L3-RP_io
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Azazelle/L3-RP_io GGUF Quantizations

*Optimized GGUF quantization files for efficient local inference*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Azazelle-L3-RP_io-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [Azazelle-L3-RP_io-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [Azazelle-L3-RP_io-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [Azazelle-L3-RP_io-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [Azazelle-L3-RP_io-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [Azazelle-L3-RP_io-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [Azazelle-L3-RP_io-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [Azazelle-L3-RP_io-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [Azazelle-L3-RP_io-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [Azazelle-L3-RP_io-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [Azazelle-L3-RP_io-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Azazelle-L3-RP_io-GGUF/blob/main/Azazelle-L3-RP_io-Q8_0.gguf) | 8145.11 MB |
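
Recent versions of `llama-cpp-python` can download and load in one step via `Llama.from_pretrained` (availability depends on your installed version, so treat this as a sketch; the repo and file names come from the table above):

```python
# One-step download-and-load sketch; the file is pulled from the Hub on first use.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="featherless-ai-quants/Azazelle-L3-RP_io-GGUF",
    filename="Azazelle-L3-RP_io-Q4_K_S.gguf",
    n_ctx=4096,
)
print(llm("Once upon a time,", max_tokens=32)["choices"][0]["text"])
```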
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama-architecture model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF | featherless-ai-quants | 2024-11-10T19:38:41Z | 78 | 0 | null | [
"gguf",
"text-generation",
"base_model:ohyeah1/Pantheon-Hermes-rp",
"base_model:quantized:ohyeah1/Pantheon-Hermes-rp",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T05:40:29Z | ---
base_model: ohyeah1/Pantheon-Hermes-rp
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ohyeah1/Pantheon-Hermes-rp GGUF Quantizations

*Optimized GGUF quantization files for efficient local inference*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ohyeah1-Pantheon-Hermes-rp-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [ohyeah1-Pantheon-Hermes-rp-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [ohyeah1-Pantheon-Hermes-rp-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [ohyeah1-Pantheon-Hermes-rp-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [ohyeah1-Pantheon-Hermes-rp-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [ohyeah1-Pantheon-Hermes-rp-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [ohyeah1-Pantheon-Hermes-rp-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [ohyeah1-Pantheon-Hermes-rp-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [ohyeah1-Pantheon-Hermes-rp-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [ohyeah1-Pantheon-Hermes-rp-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [ohyeah1-Pantheon-Hermes-rp-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF/blob/main/ohyeah1-Pantheon-Hermes-rp-Q8_0.gguf) | 8145.11 MB |
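
To fetch a single quant without pulling the other ten files in the table, `huggingface_hub`'s `snapshot_download` accepts a pattern filter; a minimal sketch:

```python
# Download only the Q5_K_S file from this repo via a glob pattern.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="featherless-ai-quants/ohyeah1-Pantheon-Hermes-rp-GGUF",
    allow_patterns=["*Q5_K_S.gguf"],
)
print(local_dir)  # directory containing just the matched file
```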
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama-architecture model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF | featherless-ai-quants | 2024-11-10T19:38:29Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:jdqqjr/llama3-8b-instruct-uncensored-JR",
"base_model:quantized:jdqqjr/llama3-8b-instruct-uncensored-JR",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T04:53:14Z | ---
base_model: jdqqjr/llama3-8b-instruct-uncensored-JR
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# jdqqjr/llama3-8b-instruct-uncensored-JR GGUF Quantizations

*Optimized GGUF quantization files for efficient local inference*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [jdqqjr-llama3-8b-instruct-uncensored-JR-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [jdqqjr-llama3-8b-instruct-uncensored-JR-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/jdqqjr-llama3-8b-instruct-uncensored-JR-GGUF/blob/main/jdqqjr-llama3-8b-instruct-uncensored-JR-Q8_0.gguf) | 8145.11 MB |
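
File size is only part of the memory story; the KV cache grows with context length. A rough estimate, assuming the standard Llama-3-8B architecture constants (32 layers, 8 KV heads, head dim 128 -- assumptions, not read from the file) and an fp16 cache:

```python
# Rough memory sketch: quant file size plus an fp16 KV-cache estimate.
def kv_cache_mb(n_ctx: int, n_layers: int = 32, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    # factor of 2 for the separate K and V tensors
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem / 1024**2

q4_k_m_mb = 4692.78  # from the table above
print(f"~{q4_k_m_mb + kv_cache_mb(8192):.0f} MB for Q4_K_M at 8k context")
```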
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama-architecture model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF | featherless-ai-quants | 2024-11-10T19:38:15Z | 14 | 0 | null | [
"gguf",
"text-generation",
"base_model:BarryFutureman/WestLakeX-7B-EvoMerge-Variant2",
"base_model:quantized:BarryFutureman/WestLakeX-7B-EvoMerge-Variant2",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-11-05T04:18:09Z | ---
base_model: BarryFutureman/WestLakeX-7B-EvoMerge-Variant2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# BarryFutureman/WestLakeX-7B-EvoMerge-Variant2 GGUF Quantizations

*Optimized GGUF quantization files for efficient local inference*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-GGUF/blob/main/BarryFutureman-WestLakeX-7B-EvoMerge-Variant2-Q8_0.gguf) | 7339.34 MB |
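
For context, comparing each quant against a hypothetical fp16 export (~2 bytes per parameter for a ~7.24B-parameter model -- an assumption about the base model, not a published number) shows how much these formats compress:

```python
# Size of selected quants relative to an assumed fp16 export of the same model.
FP16_MB = 7.24e9 * 2 / 1024**2  # ~13.8 GB

for name, mb in [("Q2_K", 2593.27), ("Q4_K_M", 4166.07), ("Q6_K", 5666.80)]:
    print(f"{name}: {mb / FP16_MB:.0%} of fp16 size")  # sizes from the table
```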
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama-architecture model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |