| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-26 12:28:48) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 498 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-26 12:28:16) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/Guardian-Samantha-7b-slerp-GGUF | mradermacher | 2024-05-06T05:36:25Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"llamas-community/LlamaGuard-7b",
"ParthasarathyShanmugam/llama-2-7b-samantha",
"en",
"base_model:brichett/Guardian-Samantha-7b-slerp",
"base_model:quantized:brichett/Guardian-Samantha-7b-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-29T17:39:09Z | ---
base_model: brichett/Guardian-Samantha-7b-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- llamas-community/LlamaGuard-7b
- ParthasarathyShanmugam/llama-2-7b-samantha
---
## About
static quants of https://huggingface.co/brichett/Guardian-Samantha-7b-slerp
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
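As a concrete starting point, here is a minimal sketch (not part of the original card) that downloads one of the quants from the table below and runs it locally with llama-cpp-python; the file name is taken from the Q4_K_S row, and the context size is an illustrative choice.

```python
# Minimal sketch: download a single quant from this repo and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python` and enough RAM/VRAM for the file.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Guardian-Samantha-7b-slerp-GGUF",
    filename="Guardian-Samantha-7b-slerp.Q4_K_S.gguf",  # "fast, recommended" in the table below
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # illustrative context size
out = llm("Hello, who are you?", max_tokens=64)
print(out["choices"][0]["text"])
```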
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q4_0.gguf) | Q4_0 | 4.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.IQ4_NL.gguf) | IQ4_NL | 4.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Guardian-Samantha-7b-slerp-GGUF/resolve/main/Guardian-Samantha-7b-slerp.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/llamoe-8x1b-hermes-GGUF | mradermacher | 2024-05-06T05:36:05Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-29T18:12:35Z | ---
base_model: N8Programs/llamoe-8x1b-hermes
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
static quants of https://huggingface.co/N8Programs/llamoe-8x1b-hermes
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.IQ3_XS.gguf) | IQ3_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.IQ3_M.gguf) | IQ3_M | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q3_K_M.gguf) | Q3_K_M | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q3_K_L.gguf) | Q3_K_L | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q4_0.gguf) | Q4_0 | 3.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.IQ4_NL.gguf) | IQ4_NL | 3.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q4_K_S.gguf) | Q4_K_S | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q4_K_M.gguf) | Q4_K_M | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q5_K_S.gguf) | Q5_K_S | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q5_K_M.gguf) | Q5_K_M | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q6_K.gguf) | Q6_K | 5.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-hermes-GGUF/resolve/main/llamoe-8x1b-hermes.Q8_0.gguf) | Q8_0 | 7.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/llamoe-8x1b-GGUF | mradermacher | 2024-05-06T05:36:02Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-29T18:44:17Z | ---
base_model: N8Programs/llamoe-8x1b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
static quants of https://huggingface.co/N8Programs/llamoe-8x1b
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.IQ3_XS.gguf) | IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.IQ3_S.gguf) | IQ3_S | 3.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.IQ3_M.gguf) | IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q3_K_L.gguf) | Q3_K_L | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.IQ4_XS.gguf) | IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q4_0.gguf) | Q4_0 | 4.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.IQ4_NL.gguf) | IQ4_NL | 4.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q5_K_S.gguf) | Q5_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llamoe-8x1b-GGUF/resolve/main/llamoe-8x1b.Q8_0.gguf) | Q8_0 | 7.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Wespeaker/wespeaker-voxceleb-ecapa-tdnn512 | Wespeaker | 2024-05-06T05:35:55Z | 5 | 0 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2024-05-06T05:10:38Z | ---
license: apache-2.0
---
|
mradermacher/roleplay-mis_wes-GGUF | mradermacher | 2024-05-06T05:35:50Z | 225 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"HuggingFaceH4/mistral-7b-grok",
"senseable/WestLake-7B-v2",
"en",
"base_model:ajay141/roleplay-mis_wes",
"base_model:quantized:ajay141/roleplay-mis_wes",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-29T20:03:16Z | ---
base_model: ajay141/roleplay-mis_wes
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- HuggingFaceH4/mistral-7b-grok
- senseable/WestLake-7B-v2
---
## About
static quants of https://huggingface.co/ajay141/roleplay-mis_wes
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Customer-Support-Clown-Extended-GGUF | mradermacher | 2024-05-06T05:35:47Z | 134 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"arcee-ai/Clown-DPO-Extended",
"mistralai/Mistral-7B-v0.1+predibase/customer_support",
"en",
"base_model:arcee-ai/Customer-Support-Clown-Extended",
"base_model:quantized:arcee-ai/Customer-Support-Clown-Extended",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-29T20:11:49Z | ---
base_model: arcee-ai/Customer-Support-Clown-Extended
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- arcee-ai/Clown-DPO-Extended
- mistralai/Mistral-7B-v0.1+predibase/customer_support
---
## About
static quants of https://huggingface.co/arcee-ai/Customer-Support-Clown-Extended
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Customer-Support-Clown-Extended-GGUF/resolve/main/Customer-Support-Clown-Extended.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/QueenLiz-120B-GGUF | mradermacher | 2024-05-06T05:35:44Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Noodlz/QueenLiz-120B",
"base_model:quantized:Noodlz/QueenLiz-120B",
"endpoints_compatible",
"region:us"
] | null | 2024-03-29T21:13:38Z | ---
base_model: Noodlz/QueenLiz-120B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/Noodlz/QueenLiz-120B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/QueenLiz-120B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
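For the multi-part files in the table below, the parts just need to be concatenated in order into a single .gguf before loading. A minimal sketch (an assumption, not from the card) using huggingface_hub and plain byte-wise copying:

```python
# Minimal sketch: fetch the split Q4_K_S parts listed below and join them into one
# .gguf file (equivalent to `cat part1 part2 > file` on the shell).
from huggingface_hub import hf_hub_download

repo = "mradermacher/QueenLiz-120B-GGUF"
parts = [
    "QueenLiz-120B.Q4_K_S.gguf.part1of2",
    "QueenLiz-120B.Q4_K_S.gguf.part2of2",
]

with open("QueenLiz-120B.Q4_K_S.gguf", "wb") as out:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as part:
            while chunk := part.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```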
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q2_K.gguf) | Q2_K | 44.6 | |
| [GGUF](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.IQ3_XS.gguf) | IQ3_XS | 49.6 | |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q3_K_S.gguf.part2of2) | Q3_K_S | 52.2 | |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.IQ3_S.gguf.part2of2) | IQ3_S | 52.4 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.IQ3_M.gguf.part2of2) | IQ3_M | 54.2 | |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q3_K_M.gguf.part2of2) | Q3_K_M | 58.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q3_K_L.gguf.part2of2) | Q3_K_L | 63.4 | |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.IQ4_XS.gguf.part2of2) | IQ4_XS | 65.2 | |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q4_0.gguf.part2of2) | Q4_0 | 68.2 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q4_K_S.gguf.part2of2) | Q4_K_S | 68.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.IQ4_NL.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.IQ4_NL.gguf.part2of2) | IQ4_NL | 68.8 | prefer IQ4_XS |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q4_K_M.gguf.part2of2) | Q4_K_M | 72.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q5_K_S.gguf.part2of2) | Q5_K_S | 83.2 | |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q5_K_M.gguf.part2of2) | Q5_K_M | 85.4 | |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q6_K.gguf.part3of3) | Q6_K | 99.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/QueenLiz-120B-GGUF/resolve/main/QueenLiz-120B.Q8_0.gguf.part3of3) | Q8_0 | 128.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/shqiponja-59b-v1-i1-GGUF | mradermacher | 2024-05-06T05:35:13Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"frankenstein",
"merge",
"en",
"base_model:nisten/shqiponja-59b-v1",
"base_model:quantized:nisten/shqiponja-59b-v1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-29T23:04:42Z | ---
base_model: nisten/shqiponja-59b-v1
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- mergekit
- frankenstein
- merge
---
## About
weighted/imatrix quants of https://huggingface.co/nisten/shqiponja-59b-v1
**Only the first 40k tokens of my 160k-token training data were used, as the model overflowed (likely a problem with the model weights).**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 13.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 14.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 20.8 | |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q2_K.gguf) | i1-Q2_K | 22.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 23.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 24.9 | |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 26.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 26.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 29.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 31.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 32.2 | |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q4_0.gguf) | i1-Q4_0 | 34.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 34.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 36.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 41.2 | |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 42.3 | |
| [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF/resolve/main/shqiponja-59b-v1.i1-Q6_K.gguf) | i1-Q6_K | 49.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
brdemorin/Llama3_7b-custom_v2 | brdemorin | 2024-05-06T05:35:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-06T05:34:46Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** brdemorin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
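A minimal inference sketch (an assumption, not documented on this card) for loading the uploaded weights with 🤗 Transformers, assuming the repository contains full merged weights rather than only LoRA adapters:

```python
# Minimal sketch: load brdemorin/Llama3_7b-custom_v2 for text generation.
# Assumes the repo holds merged weights; the intended chat template is not documented here.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "brdemorin/Llama3_7b-custom_v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```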
|
mradermacher/NeuralGanesha-7b-GGUF | mradermacher | 2024-05-06T05:35:10Z | 102 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/SomeModelsMerge-7b",
"Kukedlc/MyModelsMerge-7b",
"en",
"base_model:Kukedlc/NeuralGanesha-7b",
"base_model:quantized:Kukedlc/NeuralGanesha-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-29T23:18:08Z | ---
base_model: Kukedlc/NeuralGanesha-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/SomeModelsMerge-7b
- Kukedlc/MyModelsMerge-7b
---
## About
static quants of https://huggingface.co/Kukedlc/NeuralGanesha-7b
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralGanesha-7b-GGUF/resolve/main/NeuralGanesha-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Rhea-72b-v0.5-GGUF | mradermacher | 2024-05-06T05:35:07Z | 32 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:davidkim205/Rhea-72b-v0.5",
"base_model:quantized:davidkim205/Rhea-72b-v0.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-29T23:30:00Z | ---
base_model: davidkim205/Rhea-72b-v0.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/davidkim205/Rhea-72b-v0.5
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
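Because the larger quants in the table below are split into multiple parts, one convenient way to fetch every part of a given quant in a single call is huggingface_hub's snapshot_download with a filename pattern; a minimal sketch (an assumption, not from the card):

```python
# Minimal sketch: download every part of the Q5_K_S quant from this repo at once.
# Assumes `pip install huggingface_hub`; the parts still need to be concatenated afterwards.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mradermacher/Rhea-72b-v0.5-GGUF",
    allow_patterns=["Rhea-72b-v0.5.Q5_K_S.gguf*"],  # matches .part1of2 and .part2of2
)
print("Parts downloaded under:", local_dir)
```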
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q2_K.gguf) | Q2_K | 31.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.IQ3_XS.gguf) | IQ3_XS | 34.0 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.IQ3_S.gguf) | IQ3_S | 35.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q3_K_S.gguf) | Q3_K_S | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.IQ3_M.gguf) | IQ3_M | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q3_K_M.gguf) | Q3_K_M | 39.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q3_K_L.gguf) | Q3_K_L | 42.6 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.IQ4_XS.gguf) | IQ4_XS | 43.2 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q4_0.gguf) | Q4_0 | 45.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.IQ4_NL.gguf) | IQ4_NL | 45.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q4_K_S.gguf) | Q4_K_S | 45.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q4_K_M.gguf) | Q4_K_M | 47.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q5_K_S.gguf.part2of2) | Q5_K_S | 53.9 | |
| [PART 1](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q5_K_M.gguf.part2of2) | Q5_K_M | 55.4 | |
| [PART 1](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q6_K.gguf.part2of2) | Q6_K | 63.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.Q8_0.gguf.part2of2) | Q8_0 | 80.6 | fast, best quality |
| [P1](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.SOURCE.gguf.part1of6) [P2](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.SOURCE.gguf.part2of6) [P3](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.SOURCE.gguf.part3of6) [P4](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.SOURCE.gguf.part4of6) [P5](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.SOURCE.gguf.part5of6) [P6](https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF/resolve/main/Rhea-72b-v0.5.SOURCE.gguf.part6of6) | SOURCE | 289.3 | source gguf, only provided when it was hard to come by |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
baaaaaaaam/v6 | baaaaaaaam | 2024-05-06T05:35:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-06T03:19:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/NeuralMaths-Experiment-7b-GGUF | mradermacher | 2024-05-06T05:34:52Z | 92 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"WizardLM/WizardMath-7B-V1.1",
"mlabonne/NeuralDaredevil-7B",
"Kukedlc/Neural4gsm8k",
"Eric111/Mayo",
"Kukedlc/NeuralSirKrishna-7b",
"en",
"base_model:Kukedlc/NeuralMaths-Experiment-7b",
"base_model:quantized:Kukedlc/NeuralMaths-Experiment-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-30T02:04:55Z | ---
base_model: Kukedlc/NeuralMaths-Experiment-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- WizardLM/WizardMath-7B-V1.1
- mlabonne/NeuralDaredevil-7B
- Kukedlc/Neural4gsm8k
- Eric111/Mayo
- Kukedlc/NeuralSirKrishna-7b
---
## About
static quants of https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralMaths-Experiment-7b-GGUF/resolve/main/NeuralMaths-Experiment-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NeuraRP-7B-slerp-GGUF | mradermacher | 2024-05-06T05:34:16Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"ChaoticNeutrals/BuRP_7B",
"en",
"base_model:stevez80/NeuraRP-7B-slerp",
"base_model:quantized:stevez80/NeuraRP-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-30T07:04:45Z | ---
base_model: stevez80/NeuraRP-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- mlabonne/NeuralHermes-2.5-Mistral-7B
- ChaoticNeutrals/BuRP_7B
---
## About
static quants of https://huggingface.co/stevez80/NeuraRP-7B-slerp
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuraRP-7B-slerp-GGUF/resolve/main/NeuraRP-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF | mradermacher | 2024-05-06T05:33:55Z | 195 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"samir-fama/SamirGPT-v1",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"KoboldAI/Mistral-7B-Erebus-v3",
"en",
"base_model:stevez80/ErebusNeuralSamir-7B-dare-ties",
"base_model:quantized:stevez80/ErebusNeuralSamir-7B-dare-ties",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-30T08:36:39Z | ---
base_model: stevez80/ErebusNeuralSamir-7B-dare-ties
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- samir-fama/SamirGPT-v1
- mlabonne/NeuralHermes-2.5-Mistral-7B
- KoboldAI/Mistral-7B-Erebus-v3
---
## About
static quants of https://huggingface.co/stevez80/ErebusNeuralSamir-7B-dare-ties
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ErebusNeuralSamir-7B-dare-ties-GGUF/resolve/main/ErebusNeuralSamir-7B-dare-ties.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/StarFuse-7B-DARE-GGUF | mradermacher | 2024-05-06T05:33:51Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"en",
"base_model:hfghfghg/StarFuse-7B-DARE",
"base_model:quantized:hfghfghg/StarFuse-7B-DARE",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-30T10:06:48Z | ---
base_model: hfghfghg/StarFuse-7B-DARE
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
---
## About
static quants of https://huggingface.co/hfghfghg/StarFuse-7B-DARE
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/StarFuse-7B-DARE-GGUF/resolve/main/StarFuse-7B-DARE.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
xinping/Mixtral-8x7B-instruction-v0.1_zh-GGUF | xinping | 2024-05-06T05:33:37Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"gguf",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T08:33:12Z | ---
license: apache-2.0
language:
- zh
- en
library_name: adapter-transformers
---
Install llama.cpp.
Path of the saved gguf file: ../Mixtral-8x7B-instruction-zh_V0.1.Q4_K_S.gguf
Testing on a Linux system:
Change into the root directory of the llama.cpp installation,
then run the following on the command line (CLI):
CUDA_VISIBLE_DEVICES=0 ./main -m ../Mixtral-8x7B-instruction-zh_V0.1.Q4_K_S.gguf -n 2048 -p "今年是2024年,大后年是哪年?"
(The Chinese prompt asks: "This year is 2024 — what year is the year after the year after next?")
|
mradermacher/Neural-4-Wino-7b-GGUF | mradermacher | 2024-05-06T05:33:19Z | 368 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralFusion-7b-Dare-Ties",
"paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"macadeliccc/MBX-7B-v3-DPO",
"Kukedlc/Fasciculus-Arcuatus-7B-slerp",
"liminerity/Neurotic-Jomainotrik-7b-slerp",
"en",
"base_model:Kukedlc/Neural-4-Wino-7b",
"base_model:quantized:Kukedlc/Neural-4-Wino-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-03-30T13:08:10Z | ---
base_model: Kukedlc/Neural-4-Wino-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralFusion-7b-Dare-Ties
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
- macadeliccc/MBX-7B-v3-DPO
- Kukedlc/Fasciculus-Arcuatus-7B-slerp
- liminerity/Neurotic-Jomainotrik-7b-slerp
---
## About
static quants of https://huggingface.co/Kukedlc/Neural-4-Wino-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-4-Wino-7b-GGUF/resolve/main/Neural-4-Wino-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DavidClark314/ppo-LunarLander-v2 | DavidClark314 | 2024-05-06T05:33:18Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-06T04:56:23Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.55 +/- 14.71
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch; the checkpoint filename in this repository is an assumption.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub (filename assumed) and load the PPO policy.
checkpoint = load_from_hub("DavidClark314/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
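Continuing from the snippet above (and assuming `gymnasium` with the Box2D extra is installed), the loaded policy can be rolled out for a quick sanity check:
```python
import gymnasium as gym
# Roll the loaded policy through one episode and report the return.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done, episode_return = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"episode return: {episode_return:.1f}")
```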
|
mradermacher/Rhea-72b-v0.5-i1-GGUF | mradermacher | 2024-05-06T05:33:16Z | 62 | 4 | transformers | [
"transformers",
"gguf",
"en",
"base_model:davidkim205/Rhea-72b-v0.5",
"base_model:quantized:davidkim205/Rhea-72b-v0.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-30T13:33:40Z | ---
base_model: davidkim205/Rhea-72b-v0.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
weighted/imatrix quants of https://huggingface.co/davidkim205/Rhea-72b-v0.5
**The imatrix was calculated on a reduced 40k-token set (the "quarter" set), because the full token set caused overflows in the model (likely a model bug).**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Rhea-72b-v0.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ1_S.gguf) | i1-IQ1_S | 20.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ1_M.gguf) | i1-IQ1_M | 21.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 24.0 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 26.0 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ2_S.gguf) | i1-IQ2_S | 27.6 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q2_K.gguf) | i1-Q2_K | 31.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 34.0 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ3_S.gguf) | i1-IQ3_S | 35.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 35.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ3_M.gguf) | i1-IQ3_M | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 39.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 42.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 42.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-IQ4_NL.gguf) | i1-IQ4_NL | 45.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q4_0.gguf) | i1-Q4_0 | 45.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 45.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 53.9 | |
| [PART 1](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 55.4 | |
| [PART 1](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rhea-72b-v0.5-i1-GGUF/resolve/main/Rhea-72b-v0.5.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 63.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mixtral_AI_Cyber_4.0-GGUF | mradermacher | 2024-05-06T05:32:00Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"biology",
"chemistry",
"medical",
"en",
"base_model:LeroyDyer/Mixtral_AI_Cyber_4.0",
"base_model:quantized:LeroyDyer/Mixtral_AI_Cyber_4.0",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-31T00:09:59Z | ---
base_model: LeroyDyer/Mixtral_AI_Cyber_4.0
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- biology
- chemistry
- medical
---
## About
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_4.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_4.0-GGUF/resolve/main/Mixtral_AI_Cyber_4.0.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Saily_220B-GGUF | mradermacher | 2024-05-06T05:31:50Z | 0 | 0 | transformers | [
"transformers",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:EleutherAI/pile",
"dataset:meta-math/MetaMathQA",
"base_model:deepnight-research/Saily_220B",
"base_model:finetune:deepnight-research/Saily_220B",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-03-31T02:09:11Z | ---
base_model: deepnight-research/Saily_220B
datasets:
- tiiuae/falcon-refinedweb
- EleutherAI/pile
- meta-math/MetaMathQA
language:
- en
library_name: transformers
license: llama2
no_imatrix: 'GGML_ASSERT: llama.cpp/ggml.c:16553: i != GGML_HASHTABLE_FULL'
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/deepnight-research/Saily_220B
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
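Every quant below is split into parts, so here is a rough Python sketch for joining the pieces back into a single GGUF file before use (file names are taken from the table below; adjust them to the quant you downloaded):
```python
import shutil
from pathlib import Path
# Join split GGUF parts (e.g. Saily_220B.Q2_K.gguf.part1of2 + .part2of2) into one file,
# streaming so the multi-GB pieces are never held in memory at once.
parts = sorted(Path(".").glob("Saily_220B.Q2_K.gguf.part*"))
with open("Saily_220B.Q2_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```
On Linux, a plain `cat Saily_220B.Q2_K.gguf.part1of2 Saily_220B.Q2_K.gguf.part2of2 > Saily_220B.Q2_K.gguf` gives the same result; the README linked above covers that route as well.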
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q2_K.gguf.part2of2) | Q2_K | 76.9 | |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.IQ3_XS.gguf.part2of2) | IQ3_XS | 85.5 | |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q3_K_S.gguf.part2of2) | Q3_K_S | 90.1 | |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.IQ3_S.gguf.part2of2) | IQ3_S | 90.4 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.IQ3_M.gguf.part2of2) | IQ3_M | 93.5 | |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q3_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q3_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q3_K_M.gguf.part3of3) | Q3_K_M | 100.6 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q3_K_L.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q3_K_L.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q3_K_L.gguf.part3of3) | Q3_K_L | 109.5 | |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.IQ4_XS.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.IQ4_XS.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.IQ4_XS.gguf.part3of3) | IQ4_XS | 112.7 | |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q4_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q4_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q4_0.gguf.part3of3) | Q4_0 | 117.7 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q4_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q4_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q4_K_S.gguf.part3of3) | Q4_K_S | 118.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q4_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q4_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q4_K_M.gguf.part3of3) | Q4_K_M | 125.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q5_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q5_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q5_K_S.gguf.part3of3) | Q5_K_S | 143.8 | |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q5_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q5_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q5_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q5_K_M.gguf.part4of4) | Q5_K_M | 147.7 | |
| [PART 1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q6_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q6_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q6_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q6_K.gguf.part4of4) | Q6_K | 171.4 | very good quality |
| [P1](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q8_0.gguf.part1of5) [P2](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q8_0.gguf.part2of5) [P3](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q8_0.gguf.part3of5) [P4](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q8_0.gguf.part4of5) [P5](https://huggingface.co/mradermacher/Saily_220B-GGUF/resolve/main/Saily_220B.Q8_0.gguf.part5of5) | Q8_0 | 221.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NeuralKuke-4-All-7b-GGUF | mradermacher | 2024-05-06T05:31:24Z | 177 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Neural-4-ARC-7b",
"Kukedlc/Neural-4-Wino-7b",
"Kukedlc/NeuralSirKrishna-7b",
"Kukedlc/Neural-4-QA-7b",
"Kukedlc/Neural-4-Maths-7b",
"en",
"base_model:Kukedlc/NeuralKuke-4-All-7b",
"base_model:quantized:Kukedlc/NeuralKuke-4-All-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-31T04:15:37Z | ---
base_model: Kukedlc/NeuralKuke-4-All-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Neural-4-ARC-7b
- Kukedlc/Neural-4-Wino-7b
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/Neural-4-QA-7b
- Kukedlc/Neural-4-Maths-7b
---
## About
static quants of https://huggingface.co/Kukedlc/NeuralKuke-4-All-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuke-4-All-7b-GGUF/resolve/main/NeuralKuke-4-All-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF | mradermacher | 2024-05-06T05:31:17Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"music",
"Cyber-Series",
"en",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-31T04:46:29Z | ---
base_model: LeroyDyer/Mixtral_AI_Cyber_3.1_SFT
datasets:
- WhiteRabbitNeo/WRN-Chapter-1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- music
- Cyber-Series
---
## About
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_3.1_SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_3.1_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_3.1_SFT.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MultiverseEx26-7B-slerp-GGUF | mradermacher | 2024-05-06T05:31:08Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"yam-peleg/Experiment26-7B",
"MTSAIR/multi_verse_model",
"en",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:quantized:allknowingroger/MultiverseEx26-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-31T05:37:00Z | ---
base_model: allknowingroger/MultiverseEx26-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- yam-peleg/Experiment26-7B
- MTSAIR/multi_verse_model
---
## About
static quants of https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MultiverseEx26-7B-slerp-GGUF/resolve/main/MultiverseEx26-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Neurallaymons-7B-slerp-GGUF | mradermacher | 2024-05-06T05:31:05Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Neural-4-Maths-7b",
"ABX-AI/Starfinite-Laymons-7B",
"en",
"base_model:allknowingroger/Neurallaymons-7B-slerp",
"base_model:quantized:allknowingroger/Neurallaymons-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-31T07:01:46Z | ---
base_model: allknowingroger/Neurallaymons-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Neural-4-Maths-7b
- ABX-AI/Starfinite-Laymons-7B
---
## About
static quants of https://huggingface.co/allknowingroger/Neurallaymons-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Neurallaymons-7B-slerp-GGUF/resolve/main/Neurallaymons-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/GodziLLa-30B-GGUF | mradermacher | 2024-05-06T05:30:35Z | 119 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mix",
"cot",
"en",
"base_model:MayaPH/GodziLLa-30B",
"base_model:quantized:MayaPH/GodziLLa-30B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-31T12:51:30Z | ---
base_model: MayaPH/GodziLLa-30B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mix
- cot
---
## About
static quants of https://huggingface.co/MayaPH/GodziLLa-30B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.IQ3_XS.gguf) | IQ3_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.IQ3_S.gguf) | IQ3_S | 14.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q3_K_S.gguf) | Q3_K_S | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.IQ3_M.gguf) | IQ3_M | 15.2 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q3_K_M.gguf) | Q3_K_M | 16.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q3_K_L.gguf) | Q3_K_L | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.IQ4_XS.gguf) | IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q4_0.gguf) | Q4_0 | 18.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q4_K_S.gguf) | Q4_K_S | 18.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-GGUF/resolve/main/GodziLLa-30B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/BigLiberated-20B-V2-GGUF | mradermacher | 2024-05-06T05:29:57Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:win10/BigLiberated-20B-V2",
"base_model:quantized:win10/BigLiberated-20B-V2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-31T15:47:35Z | ---
base_model: win10/BigLiberated-20B-V2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
static quants of https://huggingface.co/win10/BigLiberated-20B-V2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q2_K.gguf) | Q2_K | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.IQ3_XS.gguf) | IQ3_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.IQ3_S.gguf) | IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.IQ3_M.gguf) | IQ3_M | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q3_K_M.gguf) | Q3_K_M | 11.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q3_K_L.gguf) | Q3_K_L | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.IQ4_XS.gguf) | IQ4_XS | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q4_0.gguf) | Q4_0 | 12.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q4_K_S.gguf) | Q4_K_S | 13.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q4_K_M.gguf) | Q4_K_M | 14.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q5_K_S.gguf) | Q5_K_S | 15.2 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q5_K_M.gguf) | Q5_K_M | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q6_K.gguf) | Q6_K | 18.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q8_0.gguf) | Q8_0 | 22.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NeuralAlgo-7B-slerp-GGUF | mradermacher | 2024-05-06T05:29:52Z | 69 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"AurelPx/Percival_01-7b-slerp",
"yam-peleg/Experiment26-7B",
"en",
"base_model:Kukedlc/NeuralAlgo-7B-slerp",
"base_model:quantized:Kukedlc/NeuralAlgo-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-31T15:53:31Z | ---
base_model: Kukedlc/NeuralAlgo-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- AurelPx/Percival_01-7b-slerp
- yam-peleg/Experiment26-7B
---
## About
static quants of https://huggingface.co/Kukedlc/NeuralAlgo-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralAlgo-7B-slerp-GGUF/resolve/main/NeuralAlgo-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Melusine_103b-GGUF | mradermacher | 2024-05-06T05:29:32Z | 95 | 1 | transformers | [
"transformers",
"gguf",
"rp",
"erp",
"chat",
"miqu",
"en",
"base_model:MarsupialAI/Melusine_103b",
"base_model:quantized:MarsupialAI/Melusine_103b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-31T16:13:39Z | ---
base_model: MarsupialAI/Melusine_103b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- rp
- erp
- chat
- miqu
---
## About
static quants of https://huggingface.co/MarsupialAI/Melusine_103b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
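Several of the larger quants below are split into `.partXofY` pieces. Here is a minimal sketch of rejoining them into one file before loading; the file names follow the table below and the output path is an assumption (on the command line, `cat part1 part2 > whole` does the same thing):

```python
# Minimal sketch: rejoin a multi-part GGUF (e.g. Q4_K_S split into .part1of2/.part2of2).
import glob
import shutil

parts = sorted(glob.glob("Melusine_103b.Q4_K_S.gguf.part*"))
with open("Melusine_103b.Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append each part in order
```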
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q2_K.gguf) | Q2_K | 38.3 | |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.IQ3_XS.gguf) | IQ3_XS | 42.6 | |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q3_K_S.gguf) | Q3_K_S | 44.9 | |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.IQ3_S.gguf) | IQ3_S | 45.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.IQ3_M.gguf) | IQ3_M | 46.5 | |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q3_K_M.gguf.part2of2) | Q3_K_M | 50.0 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q3_K_L.gguf.part2of2) | Q3_K_L | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.IQ4_XS.gguf.part2of2) | IQ4_XS | 56.0 | |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q4_0.gguf.part2of2) | Q4_0 | 58.5 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q4_K_S.gguf.part2of2) | Q4_K_S | 59.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q4_K_M.gguf.part2of2) | Q4_K_M | 62.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q5_K_S.gguf.part2of2) | Q5_K_S | 71.4 | |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q5_K_M.gguf.part2of2) | Q5_K_M | 73.3 | |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q6_K.gguf.part2of2) | Q6_K | 85.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Melusine_103b-GGUF/resolve/main/Melusine_103b.Q8_0.gguf.part3of3) | Q8_0 | 110.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/rubra-13b-h-GGUF | mradermacher | 2024-05-06T05:28:20Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-31T19:58:01Z | ---
base_model: sanjay920/rubra-13b-h
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/sanjay920/rubra-13b-h
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.IQ3_M.gguf) | IQ3_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.IQ4_XS.gguf) | IQ4_XS | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q4_K_M.gguf) | Q4_K_M | 7.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q5_K_S.gguf) | Q5_K_S | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q5_K_M.gguf) | Q5_K_M | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q6_K.gguf) | Q6_K | 10.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/rubra-13b-h-GGUF/resolve/main/rubra-13b-h.Q8_0.gguf) | Q8_0 | 13.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/KitchenSink_103b-GGUF | mradermacher | 2024-05-06T05:27:53Z | 74 | 1 | transformers | [
"transformers",
"gguf",
"rp",
"erp",
"chat",
"storywriting",
"en",
"base_model:MarsupialAI/KitchenSink_103b",
"base_model:quantized:MarsupialAI/KitchenSink_103b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-03-31T23:00:20Z | ---
base_model: MarsupialAI/KitchenSink_103b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- rp
- erp
- chat
- storywriting
---
## About
static quants of https://huggingface.co/MarsupialAI/KitchenSink_103b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/KitchenSink_103b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q2_K.gguf) | Q2_K | 38.3 | |
| [GGUF](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.IQ3_XS.gguf) | IQ3_XS | 42.6 | |
| [GGUF](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q3_K_S.gguf) | Q3_K_S | 44.9 | |
| [GGUF](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.IQ3_S.gguf) | IQ3_S | 45.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.IQ3_M.gguf) | IQ3_M | 46.5 | |
| [PART 1](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q3_K_M.gguf.part2of2) | Q3_K_M | 50.0 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q3_K_L.gguf.part2of2) | Q3_K_L | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.IQ4_XS.gguf.part2of2) | IQ4_XS | 56.0 | |
| [PART 1](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q4_K_S.gguf.part2of2) | Q4_K_S | 59.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q4_K_M.gguf.part2of2) | Q4_K_M | 62.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q5_K_S.gguf.part2of2) | Q5_K_S | 71.4 | |
| [PART 1](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q5_K_M.gguf.part2of2) | Q5_K_M | 73.3 | |
| [PART 1](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q6_K.gguf.part2of2) | Q6_K | 85.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/KitchenSink_103b-GGUF/resolve/main/KitchenSink_103b.Q8_0.gguf.part3of3) | Q8_0 | 110.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MistralMath-7B-v0.1-GGUF | mradermacher | 2024-05-06T05:27:16Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"WizardLM/WizardMath-7B-V1.1",
"meta-math/MetaMath-Mistral-7B",
"en",
"base_model:nachoaristimuno/MistralMath-7B-v0.1",
"base_model:quantized:nachoaristimuno/MistralMath-7B-v0.1",
"endpoints_compatible",
"region:us"
] | null | 2024-04-01T03:34:59Z | ---
base_model: nachoaristimuno/MistralMath-7B-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- WizardLM/WizardMath-7B-V1.1
- meta-math/MetaMath-Mistral-7B
---
## About
static quants of https://huggingface.co/nachoaristimuno/MistralMath-7B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MistralMath-7B-v0.1-GGUF/resolve/main/MistralMath-7B-v0.1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Helen-v1_7B-GGUF | mradermacher | 2024-05-06T05:27:11Z | 24 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"mistral",
"roleplay",
"en",
"base_model:Virt-io/Helen-v1_7B",
"base_model:quantized:Virt-io/Helen-v1_7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-01T04:59:07Z | ---
base_model: Virt-io/Helen-v1_7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- mistral
- roleplay
---
## About
static quants of https://huggingface.co/Virt-io/Helen-v1_7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Helen-v1_7B-GGUF/resolve/main/Helen-v1_7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/lemonade-rebase-32k-7B-GGUF | mradermacher | 2024-05-06T05:27:08Z | 34 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:grimjim/lemonade-rebase-32k-7B",
"base_model:quantized:grimjim/lemonade-rebase-32k-7B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-01T05:09:10Z | ---
base_model: grimjim/lemonade-rebase-32k-7B
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
static quants of https://huggingface.co/grimjim/lemonade-rebase-32k-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lemonade-rebase-32k-7B-GGUF/resolve/main/lemonade-rebase-32k-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/athena-120b-GGUF | mradermacher | 2024-05-06T05:26:29Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"base_model:ibivibiv/athena-120b",
"base_model:quantized:ibivibiv/athena-120b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-01T05:58:00Z | ---
base_model: ibivibiv/athena-120b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- merge
---
## About
static quants of https://huggingface.co/ibivibiv/athena-120b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/athena-120b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q2_K.gguf) | Q2_K | 45.1 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.IQ3_XS.gguf.part2of2) | IQ3_XS | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q3_K_S.gguf.part2of2) | Q3_K_S | 52.7 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.IQ3_S.gguf.part2of2) | IQ3_S | 52.9 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.IQ3_M.gguf.part2of2) | IQ3_M | 54.7 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q3_K_M.gguf.part2of2) | Q3_K_M | 58.8 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q3_K_L.gguf.part2of2) | Q3_K_L | 63.9 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.IQ4_XS.gguf.part2of2) | IQ4_XS | 65.7 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q4_K_S.gguf.part2of2) | Q4_K_S | 69.2 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q4_K_M.gguf.part2of2) | Q4_K_M | 73.1 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q5_K_S.gguf.part2of2) | Q5_K_S | 83.7 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q5_K_M.gguf.part2of2) | Q5_K_M | 86.0 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q6_K.gguf.part3of3) | Q6_K | 99.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/athena-120b-GGUF/resolve/main/athena-120b.Q8_0.gguf.part3of3) | Q8_0 | 128.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/LadybirdGonzo-7B-slerp-GGUF | mradermacher | 2024-05-06T05:26:25Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Badgids/Gonzo-Chat-7B",
"bobofrut/ladybird-base-7B-v8",
"en",
"base_model:allknowingroger/LadybirdGonzo-7B-slerp",
"base_model:quantized:allknowingroger/LadybirdGonzo-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-01T06:22:40Z | ---
base_model: allknowingroger/LadybirdGonzo-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Badgids/Gonzo-Chat-7B
- bobofrut/ladybird-base-7B-v8
---
## About
static quants of https://huggingface.co/allknowingroger/LadybirdGonzo-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LadybirdGonzo-7B-slerp-GGUF/resolve/main/LadybirdGonzo-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Merak-7B-v4-GGUF | mradermacher | 2024-05-06T05:26:06Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"id",
"en",
"dataset:wikipedia",
"dataset:Ichsan2895/OASST_Top1_Indonesian",
"dataset:Ichsan2895/alpaca-gpt4-indonesian",
"base_model:Ichsan2895/Merak-7B-v4",
"base_model:quantized:Ichsan2895/Merak-7B-v4",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-01T06:35:32Z | ---
base_model: Ichsan2895/Merak-7B-v4
datasets:
- wikipedia
- Ichsan2895/OASST_Top1_Indonesian
- Ichsan2895/alpaca-gpt4-indonesian
language:
- id
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/Ichsan2895/Merak-7B-v4
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.IQ3_XS.gguf) | IQ3_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q3_K_S.gguf) | Q3_K_S | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.IQ3_S.gguf) | IQ3_S | 3.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.IQ3_M.gguf) | IQ3_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q3_K_L.gguf) | Q3_K_L | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.IQ4_XS.gguf) | IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q4_K_S.gguf) | Q4_K_S | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q6_K.gguf) | Q6_K | 6.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Merak-7B-v4-GGUF/resolve/main/Merak-7B-v4.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MasherAI-v6-7B-GGUF | mradermacher | 2024-05-06T05:25:55Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:mahiatlinux/MasherAI-v6-7B",
"base_model:quantized:mahiatlinux/MasherAI-v6-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-01T08:00:28Z | ---
base_model: mahiatlinux/MasherAI-v6-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
static quants of https://huggingface.co/mahiatlinux/MasherAI-v6-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-v6-7B-GGUF/resolve/main/MasherAI-v6-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tsavage68/chat_550_STEPS_01beta_1e6_rate_CDPOSFT | tsavage68 | 2024-05-06T05:25:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/chat_600STEPS_1e8rate_SFT",
"base_model:finetune:tsavage68/chat_600STEPS_1e8rate_SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T05:14:30Z | ---
base_model: tsavage68/chat_600STEPS_1e8rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: chat_550_STEPS_01beta_1e6_rate_CDPOSFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chat_550_STEPS_01beta_1e6_rate_CDPOSFT
This model is a fine-tuned version of [tsavage68/chat_600STEPS_1e8rate_SFT](https://huggingface.co/tsavage68/chat_600STEPS_1e8rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6716
- Rewards/chosen: -0.1192
- Rewards/rejected: -0.1802
- Rewards/accuracies: 0.5253
- Rewards/margins: 0.0610
- Logps/rejected: -20.6044
- Logps/chosen: -17.9469
- Logits/rejected: -0.6222
- Logits/chosen: -0.6220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 550
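For orientation, here is a minimal sketch (not the author's actual training script) of how the hyperparameters above map onto `transformers.TrainingArguments`, as typically passed to a `trl` DPO trainer; the output directory name is an assumption:

```python
# Minimal sketch mapping the listed hyperparameters to transformers.TrainingArguments.
# Not the original training script; shown only to make the settings concrete.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="chat_550_STEPS_01beta_1e6_rate_CDPOSFT",  # assumed name
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # total train batch size 8
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=550,
)
```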
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6925 | 0.0977 | 50 | 0.6917 | 0.0117 | 0.0085 | 0.4659 | 0.0031 | -18.7166 | -16.6380 | -0.6015 | -0.6013 |
| 0.6776 | 0.1953 | 100 | 0.6812 | -0.0371 | -0.0646 | 0.5253 | 0.0275 | -19.4479 | -17.1259 | -0.6242 | -0.6241 |
| 0.6927 | 0.2930 | 150 | 0.6819 | -0.0802 | -0.1112 | 0.5011 | 0.0310 | -19.9140 | -17.5569 | -0.6222 | -0.6220 |
| 0.6928 | 0.3906 | 200 | 0.6776 | -0.1032 | -0.1444 | 0.5033 | 0.0412 | -20.2463 | -17.7865 | -0.6050 | -0.6048 |
| 0.6937 | 0.4883 | 250 | 0.6762 | -0.0643 | -0.1121 | 0.5121 | 0.0478 | -19.9228 | -17.3977 | -0.6013 | -0.6011 |
| 0.6758 | 0.5859 | 300 | 0.6717 | -0.1055 | -0.1663 | 0.5231 | 0.0608 | -20.4645 | -17.8094 | -0.6301 | -0.6299 |
| 0.6696 | 0.6836 | 350 | 0.6724 | -0.1144 | -0.1731 | 0.5275 | 0.0587 | -20.5330 | -17.8991 | -0.6162 | -0.6160 |
| 0.6587 | 0.7812 | 400 | 0.6711 | -0.1221 | -0.1842 | 0.5297 | 0.0621 | -20.6441 | -17.9756 | -0.6249 | -0.6247 |
| 0.6755 | 0.8789 | 450 | 0.6713 | -0.1178 | -0.1794 | 0.5341 | 0.0616 | -20.5960 | -17.9326 | -0.6214 | -0.6212 |
| 0.6637 | 0.9766 | 500 | 0.6712 | -0.1188 | -0.1808 | 0.5253 | 0.0620 | -20.6100 | -17.9427 | -0.6222 | -0.6220 |
| 0.5575 | 1.0742 | 550 | 0.6716 | -0.1192 | -0.1802 | 0.5253 | 0.0610 | -20.6044 | -17.9469 | -0.6222 | -0.6220 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.0
- Tokenizers 0.19.1
|
mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF | mradermacher | 2024-05-06T05:25:33Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-01T09:23:52Z | ---
base_model: LeroyDyer/Mixtral_AI_CyberBrain_SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_CyberBrain_SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/saily-13b-v0-GGUF | mradermacher | 2024-05-06T05:24:55Z | 90 | 0 | transformers | [
"transformers",
"gguf",
"7B",
"Saily",
"DEEPNIGHT",
"Llama",
"Llama2",
"en",
"base_model:deepnight-research/saily-13b-v0",
"base_model:quantized:deepnight-research/saily-13b-v0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-01T13:39:23Z | ---
base_model: deepnight-research/saily-13b-v0
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- 7B
- Saily
- DEEPNIGHT
- Llama
- Llama2
---
## About
static quants of https://huggingface.co/deepnight-research/saily-13b-v0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q2_K.gguf) | Q2_K | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.IQ3_XS.gguf) | IQ3_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.IQ3_M.gguf) | IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.IQ4_XS.gguf) | IQ4_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/saily-13b-v0-GGUF/resolve/main/saily-13b-v0.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Yeet_51b_200k-GGUF | mradermacher | 2024-05-06T05:24:43Z | 106 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MarsupialAI/Yeet_51b_200k",
"base_model:quantized:MarsupialAI/Yeet_51b_200k",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-01T16:03:18Z | ---
base_model: MarsupialAI/Yeet_51b_200k
language:
- en
library_name: transformers
license: other
license_name: yi-other
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/MarsupialAI/Yeet_51b_200k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q2_K.gguf) | Q2_K | 19.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.IQ3_XS.gguf) | IQ3_XS | 21.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q3_K_S.gguf) | Q3_K_S | 22.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.IQ3_S.gguf) | IQ3_S | 22.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.IQ3_M.gguf) | IQ3_M | 23.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q3_K_M.gguf) | Q3_K_M | 25.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q3_K_L.gguf) | Q3_K_L | 27.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.IQ4_XS.gguf) | IQ4_XS | 28.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q4_K_S.gguf) | Q4_K_S | 29.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q4_K_M.gguf) | Q4_K_M | 31.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q5_K_S.gguf) | Q5_K_S | 35.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q5_K_M.gguf) | Q5_K_M | 36.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q6_K.gguf) | Q6_K | 42.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q8_0.gguf.part2of2) | Q8_0 | 54.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/penchant-7B-GGUF | mradermacher | 2024-05-06T05:24:39Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:afoland/penchant-7B",
"base_model:quantized:afoland/penchant-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-01T16:09:20Z | ---
base_model: afoland/penchant-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/afoland/penchant-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
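As a quick, hedged illustration of loading one of the single-file quants listed below (this assumes the `huggingface_hub` and `llama-cpp-python` packages are installed; the context size and GPU-offload values are placeholders, not recommendations from this card):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant from this repo into the local Hugging Face cache.
model_path = hf_hub_download(
    repo_id="mradermacher/penchant-7B-GGUF",
    filename="penchant-7B.Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx and n_gpu_layers are illustrative values only.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=0)

result = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```

Any other single-file quant from the table can be substituted by changing `filename`.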
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/penchant-7B-GGUF/resolve/main/penchant-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/TimeMax-20B-GGUF | mradermacher | 2024-05-06T05:24:37Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:R136a1/TimeMax-20B",
"base_model:quantized:R136a1/TimeMax-20B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-01T16:14:35Z | ---
base_model: R136a1/TimeMax-20B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/R136a1/TimeMax-20B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q2_K.gguf) | Q2_K | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.IQ3_XS.gguf) | IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.IQ3_S.gguf) | IQ3_S | 9.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q3_K_S.gguf) | Q3_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.IQ3_M.gguf) | IQ3_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q3_K_M.gguf) | Q3_K_M | 10.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q3_K_L.gguf) | Q3_K_L | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.IQ4_XS.gguf) | IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q4_K_S.gguf) | Q4_K_S | 11.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q4_K_M.gguf) | Q4_K_M | 12.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q5_K_S.gguf) | Q5_K_S | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q5_K_M.gguf) | Q5_K_M | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q6_K.gguf) | Q6_K | 16.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q8_0.gguf) | Q8_0 | 21.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/GodziLLa-30B-i1-GGUF | mradermacher | 2024-05-06T05:24:23Z | 108 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mix",
"cot",
"en",
"base_model:MayaPH/GodziLLa-30B",
"base_model:quantized:MayaPH/GodziLLa-30B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-01T19:31:22Z | ---
base_model: MayaPH/GodziLLa-30B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mix
- cot
---
## About
weighted/imatrix quants of https://huggingface.co/MayaPH/GodziLLa-30B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GodziLLa-30B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ3_M.gguf) | i1-IQ3_M | 15.2 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/GodziLLa-30B-i1-GGUF/resolve/main/GodziLLa-30B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/TimeMax-20B-i1-GGUF | mradermacher | 2024-05-06T05:24:20Z | 6 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"en",
"base_model:R136a1/TimeMax-20B",
"base_model:quantized:R136a1/TimeMax-20B",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-01T20:10:41Z | ---
base_model: R136a1/TimeMax-20B
language:
- en
library_name: transformers
pipeline_tag: text-generation
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
weighted/imatrix quants of https://huggingface.co/R136a1/TimeMax-20B
**Only 50k tokens from my standard set have been used, as more caused an overflow. This is likely a problem with the model itself.**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/TimeMax-20B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q2_K.gguf) | i1-Q2_K | 7.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ3_M.gguf) | i1-IQ3_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q4_0.gguf) | i1-Q4_0 | 11.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q6_K.gguf) | i1-Q6_K | 16.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Stork-7B-slerp-GGUF | mradermacher | 2024-05-06T05:24:18Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"bofenghuang/vigostral-7b-chat",
"jpacifico/French-Alpaca-7B-Instruct-beta",
"fr",
"base_model:ntnq/Stork-7B-slerp",
"base_model:quantized:ntnq/Stork-7B-slerp",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-01T20:27:02Z | ---
base_model: ntnq/Stork-7B-slerp
language:
- fr
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- bofenghuang/vigostral-7b-chat
- jpacifico/French-Alpaca-7B-Instruct-beta
---
## About
static quants of https://huggingface.co/ntnq/Stork-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Stork-7B-slerp-GGUF/resolve/main/Stork-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Zebrafish-dare-7B-GGUF | mradermacher | 2024-05-06T05:24:07Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:mlabonne/Zebrafish-dare-7B",
"base_model:quantized:mlabonne/Zebrafish-dare-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-01T23:15:33Z | ---
base_model: mlabonne/Zebrafish-dare-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
static quants of https://huggingface.co/mlabonne/Zebrafish-dare-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-dare-7B-GGUF/resolve/main/Zebrafish-dare-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Zebrafish-linear-7B-GGUF | mradermacher | 2024-05-06T05:24:05Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:mlabonne/Zebrafish-linear-7B",
"base_model:quantized:mlabonne/Zebrafish-linear-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T00:44:08Z | ---
base_model: mlabonne/Zebrafish-linear-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
static quants of https://huggingface.co/mlabonne/Zebrafish-linear-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MonarchPipe-7B-slerp-GGUF | mradermacher | 2024-05-06T05:23:52Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1227",
"mlabonne/AlphaMonarch-7B",
"en",
"base_model:ichigoberry/MonarchPipe-7B-slerp",
"base_model:quantized:ichigoberry/MonarchPipe-7B-slerp",
"license:cc-by-nc-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T02:11:17Z | ---
base_model: ichigoberry/MonarchPipe-7B-slerp
language:
- en
library_name: transformers
license: cc-by-nc-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1227
- mlabonne/AlphaMonarch-7B
---
## About
static quants of https://huggingface.co/ichigoberry/MonarchPipe-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Wespeaker/wespeaker-voxceleb-gemini-DFresnet114-LM | Wespeaker | 2024-05-06T05:23:46Z | 4 | 0 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2024-05-06T05:14:05Z | ---
license: apache-2.0
---
|
mradermacher/NeuralStock-7B-v2-GGUF | mradermacher | 2024-05-06T05:23:35Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:Kukedlc/NeuralStock-7B-v2",
"base_model:quantized:Kukedlc/NeuralStock-7B-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-02T04:58:23Z | ---
base_model: Kukedlc/NeuralStock-7B-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
static quants of https://huggingface.co/Kukedlc/NeuralStock-7B-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-v2-GGUF/resolve/main/NeuralStock-7B-v2.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/UNAversal-8x7B-v1beta-GGUF | mradermacher | 2024-05-06T05:23:33Z | 69 | 0 | transformers | [
"transformers",
"gguf",
"UNA",
"juanako",
"mixtral",
"MoE",
"en",
"base_model:fblgit/UNAversal-8x7B-v1beta",
"base_model:quantized:fblgit/UNAversal-8x7B-v1beta",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-02T05:11:23Z | ---
base_model: fblgit/UNAversal-8x7B-v1beta
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- UNA
- juanako
- mixtral
- MoE
---
## About
static quants of https://huggingface.co/fblgit/UNAversal-8x7B-v1beta
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q2_K.gguf) | Q2_K | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.IQ3_S.gguf) | IQ3_S | 20.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q3_K_S.gguf) | Q3_K_S | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.IQ3_M.gguf) | IQ3_M | 21.7 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q3_K_M.gguf) | Q3_K_M | 22.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q3_K_L.gguf) | Q3_K_L | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.IQ4_XS.gguf) | IQ4_XS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q4_K_S.gguf) | Q4_K_S | 27.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q4_K_M.gguf) | Q4_K_M | 28.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q5_K_S.gguf) | Q5_K_S | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q5_K_M.gguf) | Q5_K_M | 33.5 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q6_K.gguf) | Q6_K | 38.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SatoshiNv5-GGUF | mradermacher | 2024-05-06T05:23:28Z | 55 | 0 | transformers | [
"transformers",
"gguf",
"finance",
"legal",
"biology",
"art",
"en",
"base_model:chrischain/SatoshiNv5",
"base_model:quantized:chrischain/SatoshiNv5",
"license:cc-by-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T06:23:06Z | ---
base_model: chrischain/SatoshiNv5
language:
- en
library_name: transformers
license: cc-by-2.0
quantized_by: mradermacher
tags:
- finance
- legal
- biology
- art
---
## About
static quants of https://huggingface.co/chrischain/SatoshiNv5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SatoshiNv5-GGUF/resolve/main/SatoshiNv5.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/bagel-dpo-20b-v04-GGUF | mradermacher | 2024-05-06T05:23:25Z | 206 | 2 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ai2_arc",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:lmsys/lmsys-chat-1m",
"dataset:ParisNeo/lollms_aware_dataset",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:kingbri/PIPPA-shareGPT",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:ropes",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:b-mc2/sql-create-context",
"dataset:squad_v2",
"dataset:mattpscott/airoboros-summarization",
"dataset:migtissera/Synthia-v1.3",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:winogrande",
"base_model:jondurbin/bagel-dpo-20b-v04",
"base_model:quantized:jondurbin/bagel-dpo-20b-v04",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-02T06:31:44Z | ---
base_model: jondurbin/bagel-dpo-20b-v04
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/internlm/internlm2-20b#open-source-license
license_name: internlm2-20b
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jondurbin/bagel-dpo-20b-v04
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q2_K.gguf) | Q2_K | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.IQ3_XS.gguf) | IQ3_XS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q3_K_S.gguf) | Q3_K_S | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.IQ3_S.gguf) | IQ3_S | 9.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.IQ3_M.gguf) | IQ3_M | 9.9 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q3_K_M.gguf) | Q3_K_M | 10.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q3_K_L.gguf) | Q3_K_L | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.IQ4_XS.gguf) | IQ4_XS | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q4_K_S.gguf) | Q4_K_S | 12.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q4_K_M.gguf) | Q4_K_M | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q5_K_S.gguf) | Q5_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q5_K_M.gguf) | Q5_K_M | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q6_K.gguf) | Q6_K | 17.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q8_0.gguf) | Q8_0 | 21.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.SOURCE.gguf) | SOURCE | 39.8 | source gguf, only provided when it was hard to come by |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Yeet_51b_200k-i1-GGUF | mradermacher | 2024-05-06T05:23:14Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MarsupialAI/Yeet_51b_200k",
"base_model:quantized:MarsupialAI/Yeet_51b_200k",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T09:58:07Z | ---
base_model: MarsupialAI/Yeet_51b_200k
language:
- en
library_name: transformers
license: other
license_name: yi-other
no_imatrix: 'IQ3_XXS GGML_ASSERT: llama.cpp/ggml-quants.c:11239: grid_index >= 0'
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/MarsupialAI/Yeet_51b_200k
**No more quants forthcoming, as llama.cpp crashes.**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q2_K.gguf) | i1-Q2_K | 19.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 22.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 25.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 27.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q4_0.gguf) | i1-Q4_0 | 29.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 29.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 31.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 35.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 36.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q6_K.gguf) | i1-Q6_K | 42.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NeuralSirKrishna-7b-DPO-GGUF | mradermacher | 2024-05-06T05:23:09Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Kukedlc/NeuralSirKrishna-7b-DPO",
"base_model:quantized:Kukedlc/NeuralSirKrishna-7b-DPO",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T10:50:14Z | ---
base_model: Kukedlc/NeuralSirKrishna-7b-DPO
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Kukedlc/NeuralSirKrishna-7b-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralSirKrishna-7b-DPO-GGUF/resolve/main/NeuralSirKrishna-7b-DPO.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/bagel-dpo-34b-v0.5-GGUF | mradermacher | 2024-05-06T05:23:03Z | 80 | 8 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ai2_arc",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:lmsys/lmsys-chat-1m",
"dataset:ParisNeo/lollms_aware_dataset",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:kingbri/PIPPA-shareGPT",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:ropes",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:b-mc2/sql-create-context",
"dataset:squad_v2",
"dataset:mattpscott/airoboros-summarization",
"dataset:migtissera/Synthia-v1.3",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:winogrande",
"base_model:jondurbin/bagel-dpo-34b-v0.5",
"base_model:quantized:jondurbin/bagel-dpo-34b-v0.5",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T12:20:42Z | ---
base_model: jondurbin/bagel-dpo-34b-v0.5
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jondurbin/bagel-dpo-34b-v0.5
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
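If helpful, here is a small sketch for fetching a single file from this repo with the huggingface_hub client; the chosen quant is just an example and the library call is an assumed convenience rather than part of this card.
```python
# Download one quant from this repo (assumed workflow, not prescribed by the card).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/bagel-dpo-34b-v0.5-GGUF",
    filename="bagel-dpo-34b-v0.5.Q4_K_S.gguf",  # pick any file from the table below
)
print(path)  # local cache path to hand to a GGUF-capable runtime such as llama.cpp
```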
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q2_K.gguf) | Q2_K | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.IQ3_XS.gguf) | IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q3_K_S.gguf) | Q3_K_S | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.IQ3_S.gguf) | IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.IQ3_M.gguf) | IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q3_K_M.gguf) | Q3_K_M | 17.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q3_K_L.gguf) | Q3_K_L | 18.8 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.IQ4_XS.gguf) | IQ4_XS | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q4_K_S.gguf) | Q4_K_S | 20.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q4_K_M.gguf) | Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q5_K_S.gguf) | Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q5_K_M.gguf) | Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q6_K.gguf) | Q6_K | 28.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF/resolve/main/bagel-dpo-34b-v0.5.Q8_0.gguf) | Q8_0 | 37.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/bagel-34b-v0.5-GGUF | mradermacher | 2024-05-06T05:22:41Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ai2_arc",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:lmsys/lmsys-chat-1m",
"dataset:ParisNeo/lollms_aware_dataset",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:kingbri/PIPPA-shareGPT",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:ropes",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:b-mc2/sql-create-context",
"dataset:squad_v2",
"dataset:mattpscott/airoboros-summarization",
"dataset:migtissera/Synthia-v1.3",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:winogrande",
"base_model:jondurbin/bagel-34b-v0.5",
"base_model:quantized:jondurbin/bagel-34b-v0.5",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T14:45:55Z | ---
base_model: jondurbin/bagel-34b-v0.5
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jondurbin/bagel-34b-v0.5
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/bagel-34b-v0.5-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q2_K.gguf) | Q2_K | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.IQ3_XS.gguf) | IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q3_K_S.gguf) | Q3_K_S | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.IQ3_S.gguf) | IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.IQ3_M.gguf) | IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q3_K_M.gguf) | Q3_K_M | 17.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q3_K_L.gguf) | Q3_K_L | 18.8 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.IQ4_XS.gguf) | IQ4_XS | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q4_K_S.gguf) | Q4_K_S | 20.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q4_K_M.gguf) | Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q5_K_S.gguf) | Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q5_K_M.gguf) | Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q6_K.gguf) | Q6_K | 28.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q8_0.gguf) | Q8_0 | 37.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nuebaek/komt_mistral_mss_user_111_max_steps_80 | nuebaek | 2024-05-06T05:22:39Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-06T05:19:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
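The card does not yet provide starter code; as an interim placeholder, here is a minimal, hedged sketch based only on the repository metadata (a Mistral-style chat model stored with 4-bit bitsandbytes weights and served through transformers) — the prompt and generation settings are assumptions.
```python
# Generic transformers sketch; adjust to the model's actual prompt format once documented.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nuebaek/komt_mistral_mss_user_111_max_steps_80"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself briefly."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```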
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/TripleMerge-7B-Ties-GGUF | mradermacher | 2024-05-06T05:22:37Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"allknowingroger/limyClown-7B-slerp",
"allknowingroger/LeeMerge-7B-slerp",
"en",
"base_model:allknowingroger/TripleMerge-7B-Ties",
"base_model:quantized:allknowingroger/TripleMerge-7B-Ties",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T16:30:30Z | ---
base_model: allknowingroger/TripleMerge-7B-Ties
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/LeeMerge-7B-slerp
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/allknowingroger/TripleMerge-7B-Ties
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TripleMerge-7B-Ties-GGUF/resolve/main/TripleMerge-7B-Ties.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kristina-shemet/Fine-Tuned_Mistral-Instruct-V2_06-05 | kristina-shemet | 2024-05-06T05:22:22Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-05-06T05:22:02Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
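No starter code is given yet; as an interim sketch grounded only in this card's metadata (a PEFT adapter on mistralai/Mistral-7B-Instruct-v0.2), the adapter could be attached roughly as follows — dtype and device handling are assumptions.
```python
# Load the base model, then attach this repo as a PEFT adapter (assumed workflow).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "kristina-shemet/Fine-Tuned_Mistral-Instruct-V2_06-05"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # applies the fine-tuned adapter weights
```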
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
mradermacher/DevPearl-2x7B-GGUF | mradermacher | 2024-05-06T05:22:20Z | 127 | 1 | transformers | [
"transformers",
"gguf",
"moe",
"merge",
"mergekit",
"lazymergekit",
"deepseek-ai/deepseek-coder-6.7b-instruct",
"defog/sqlcoder-7b-2",
"Python",
"Javascript",
"sql",
"en",
"base_model:louisbrulenaudet/DevPearl-2x7B",
"base_model:quantized:louisbrulenaudet/DevPearl-2x7B",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T18:03:56Z | ---
base_model: louisbrulenaudet/DevPearl-2x7B
language:
- en
library_name: transformers
license: cc-by-sa-4.0
quantized_by: mradermacher
tags:
- moe
- merge
- mergekit
- lazymergekit
- deepseek-ai/deepseek-coder-6.7b-instruct
- defog/sqlcoder-7b-2
- Python
- Javascript
- sql
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/louisbrulenaudet/DevPearl-2x7B
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q2_K.gguf) | Q2_K | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.IQ3_XS.gguf) | IQ3_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.IQ3_S.gguf) | IQ3_S | 5.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q3_K_S.gguf) | Q3_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.IQ3_M.gguf) | IQ3_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q3_K_M.gguf) | Q3_K_M | 5.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q3_K_L.gguf) | Q3_K_L | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q4_K_S.gguf) | Q4_K_S | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q4_K_M.gguf) | Q4_K_M | 7.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q5_K_S.gguf) | Q5_K_S | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q5_K_M.gguf) | Q5_K_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DevPearl-2x7B-GGUF/resolve/main/DevPearl-2x7B.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/airoboros-34b-3.3-GGUF | mradermacher | 2024-05-06T05:21:59Z | 60 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:jondurbin/airoboros-3.2",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mattpscott/airoboros-summarization",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:jondurbin/airoboros-34b-3.3",
"base_model:quantized:jondurbin/airoboros-34b-3.3",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T23:45:26Z | ---
base_model: jondurbin/airoboros-34b-3.3
datasets:
- jondurbin/airoboros-3.2
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- jondurbin/gutenberg-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- piqa
- Vezora/Tested-22k-Python-Alpaca
- mattpscott/airoboros-summarization
- unalignment/toxic-dpo-v0.2
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jondurbin/airoboros-34b-3.3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
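For the larger quants, a download sometimes arrives as several parts; the sketch below shows plain byte-wise concatenation in Python — the part file names are hypothetical placeholders, so check the repository's actual file listing (and TheBloke's README linked above) before running.
```python
# Join split GGUF parts back into one file by simple byte concatenation (order matters).
import shutil

parts = [
    "airoboros-34b-3.3.Q8_0.gguf.part1of2",  # hypothetical names - verify against the repo
    "airoboros-34b-3.3.Q8_0.gguf.part2of2",
]
with open("airoboros-34b-3.3.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)  # streams each part without loading it into RAM
```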
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q2_K.gguf) | Q2_K | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.IQ3_XS.gguf) | IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q3_K_S.gguf) | Q3_K_S | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.IQ3_S.gguf) | IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.IQ3_M.gguf) | IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q3_K_M.gguf) | Q3_K_M | 17.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q3_K_L.gguf) | Q3_K_L | 18.8 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.IQ4_XS.gguf) | IQ4_XS | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q4_K_S.gguf) | Q4_K_S | 20.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q4_K_M.gguf) | Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q5_K_S.gguf) | Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q5_K_M.gguf) | Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q6_K.gguf) | Q6_K | 28.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q8_0.gguf) | Q8_0 | 37.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/bagel-dpo-20b-v04-i1-GGUF | mradermacher | 2024-05-06T05:21:57Z | 81 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ai2_arc",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:lmsys/lmsys-chat-1m",
"dataset:ParisNeo/lollms_aware_dataset",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:kingbri/PIPPA-shareGPT",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:ropes",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:b-mc2/sql-create-context",
"dataset:squad_v2",
"dataset:mattpscott/airoboros-summarization",
"dataset:migtissera/Synthia-v1.3",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:winogrande",
"base_model:jondurbin/bagel-dpo-20b-v04",
"base_model:quantized:jondurbin/bagel-dpo-20b-v04",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-02T23:46:09Z | ---
base_model: jondurbin/bagel-dpo-20b-v04
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/internlm/internlm2-20b#open-source-license
license_name: internlm2-20b
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/jondurbin/bagel-dpo-20b-v04
**This uses only 95k tokens of my standard set, as the model overflowed with more.**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
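To see exactly which imatrix quants exist before downloading, the repository's file list can be queried programmatically; this is an assumed convenience, not something the card requires.
```python
# List the GGUF files in this repo to pick a quant from the table below.
from huggingface_hub import list_repo_files

files = list_repo_files("mradermacher/bagel-dpo-20b-v04-i1-GGUF")
for name in sorted(files):
    if name.endswith(".gguf"):
        print(name)
```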
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ1_M.gguf) | i1-IQ1_M | 5.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ2_S.gguf) | i1-IQ2_S | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ2_M.gguf) | i1-IQ2_M | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q2_K.gguf) | i1-Q2_K | 8.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ3_S.gguf) | i1-IQ3_S | 9.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ3_M.gguf) | i1-IQ3_M | 9.9 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-IQ4_XS.gguf) | i1-IQ4_XS | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q4_0.gguf) | i1-Q4_0 | 12.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF/resolve/main/bagel-dpo-20b-v04.i1-Q6_K.gguf) | i1-Q6_K | 17.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF | mradermacher | 2024-05-06T05:21:54Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Joseph717171/Mistral-12.25B-Instruct-v0.2",
"base_model:quantized:Joseph717171/Mistral-12.25B-Instruct-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T00:13:46Z | ---
base_model: Joseph717171/Mistral-12.25B-Instruct-v0.2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Joseph717171/Mistral-12.25B-Instruct-v0.2
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.IQ3_M.gguf) | IQ3_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.IQ4_XS.gguf) | IQ4_XS | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 7.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q5_K_S.gguf) | Q5_K_S | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q6_K.gguf) | Q6_K | 10.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q8_0.gguf) | Q8_0 | 13.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Irene-RP-v4-7B-GGUF | mradermacher | 2024-05-06T05:21:45Z | 35 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"mistral",
"en",
"base_model:Virt-io/Irene-RP-v4-7B",
"base_model:quantized:Virt-io/Irene-RP-v4-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T00:56:57Z | ---
base_model: Virt-io/Irene-RP-v4-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- roleplay
- mistral
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Virt-io/Irene-RP-v4-7B
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v4-7B-GGUF/resolve/main/Irene-RP-v4-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Pearl-34B-ties-GGUF | mradermacher | 2024-05-06T05:21:42Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"jondurbin/bagel-dpo-34b-v0.2",
"abacusai/MetaMath-Bagel-DPO-34B",
"en",
"base_model:louisbrulenaudet/Pearl-34B-ties",
"base_model:quantized:louisbrulenaudet/Pearl-34B-ties",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T01:27:03Z | ---
base_model: louisbrulenaudet/Pearl-34B-ties
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- jondurbin/bagel-dpo-34b-v0.2
- abacusai/MetaMath-Bagel-DPO-34B
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/louisbrulenaudet/Pearl-34B-ties
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q2_K.gguf) | Q2_K | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.IQ3_XS.gguf) | IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q3_K_S.gguf) | Q3_K_S | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.IQ3_S.gguf) | IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.IQ3_M.gguf) | IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q3_K_M.gguf) | Q3_K_M | 17.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q3_K_L.gguf) | Q3_K_L | 18.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.IQ4_XS.gguf) | IQ4_XS | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q4_K_S.gguf) | Q4_K_S | 20.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q4_K_M.gguf) | Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q5_K_S.gguf) | Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q5_K_M.gguf) | Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q6_K.gguf) | Q6_K | 28.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q8_0.gguf) | Q8_0 | 37.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OverBloom-7b-GGUF | mradermacher | 2024-05-06T05:21:29Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T03:57:02Z | ---
base_model: nobita3921/OverBloom-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nobita3921/OverBloom-7b
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
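If you want to sanity-check a downloaded file, the `gguf` Python package can read the header without loading the weights. This is a suggestion of mine rather than something the card prescribes, and the filename below is simply the Q4_K_S entry from the table above.

```python
# Minimal sketch: read GGUF metadata to confirm architecture and tensor count.
# Requires `pip install gguf`; assumes the Q4_K_S file was downloaded beside this script.
from gguf import GGUFReader

reader = GGUFReader("OverBloom-7b.Q4_K_S.gguf")
for key in list(reader.fields)[:8]:   # e.g. general.architecture, general.name, ...
    print(key)
print("tensor count:", len(reader.tensors))
```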
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Pearl-34B-ties-i1-GGUF | mradermacher | 2024-05-06T05:21:26Z | 39 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"jondurbin/bagel-dpo-34b-v0.2",
"abacusai/MetaMath-Bagel-DPO-34B",
"en",
"base_model:louisbrulenaudet/Pearl-34B-ties",
"base_model:quantized:louisbrulenaudet/Pearl-34B-ties",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T04:15:00Z | ---
base_model: louisbrulenaudet/Pearl-34B-ties
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- jondurbin/bagel-dpo-34b-v0.2
- abacusai/MetaMath-Bagel-DPO-34B
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/louisbrulenaudet/Pearl-34B-ties
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF/resolve/main/Pearl-34B-ties.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K |
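One way to read the Size/GB column is as a rough bits-per-weight figure. The parameter count below is my own approximation (~34.4B for a "34B" model), not something stated in this card, so treat the numbers as back-of-the-envelope estimates only.

```python
# Approximate bits-per-weight from the table above (all values rough).
params = 34.4e9  # assumed parameter count for a "34B" model
for name, size_gb in {"i1-IQ2_M": 12.5, "i1-Q4_K_M": 21.3, "i1-Q6_K": 28.9}.items():
    bpw = size_gb * 1e9 * 8 / params
    print(f"{name}: ~{bpw:.1f} bits/weight")
# prints roughly 2.9, 5.0 and 6.7 bits/weight respectively
```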
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/neural-chat-7b-v3-GGUF | mradermacher | 2024-05-06T05:21:16Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"LLMs",
"mistral",
"Intel",
"en",
"dataset:Open-Orca/SlimOrca",
"base_model:Intel/neural-chat-7b-v3",
"base_model:quantized:Intel/neural-chat-7b-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-03T07:43:53Z | ---
base_model: Intel/neural-chat-7b-v3
datasets:
- Open-Orca/SlimOrca
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- LLMs
- mistral
- Intel
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Intel/neural-chat-7b-v3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-GGUF/resolve/main/neural-chat-7b-v3.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
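As a practical aside — my own rule of thumb, not advice from the card — the Size/GB column roughly indicates how much memory the weights alone need, so you can pick the largest quant that fits a given budget plus some margin for the KV cache and runtime buffers:

```python
# Sizes copied from the table above; the 1.5 GB overhead is an assumed margin,
# and the 8 GB budget is a placeholder for whatever memory you actually have free.
quants = {"Q2_K": 3.0, "IQ3_XS": 3.3, "Q4_K_S": 4.4, "Q4_K_M": 4.6, "Q6_K": 6.2, "Q8_0": 7.9}
budget_gb, overhead_gb = 8.0, 1.5
fitting = {name: size for name, size in quants.items() if size + overhead_gb <= budget_gb}
print(max(fitting, key=fitting.get))  # -> Q6_K under these assumptions
```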
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/LlaMixtral-MoE-16B-chat-GGUF | mradermacher | 2024-05-06T05:21:12Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AstraLLMs/LlaMixtral-MoE-16B-chat",
"base_model:quantized:AstraLLMs/LlaMixtral-MoE-16B-chat",
"endpoints_compatible",
"region:us"
] | null | 2024-04-03T08:23:36Z | ---
base_model: AstraLLMs/LlaMixtral-MoE-16B-chat
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/AstraLLMs/LlaMixtral-MoE-16B-chat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q2_K.gguf) | Q2_K | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.IQ3_XS.gguf) | IQ3_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q3_K_S.gguf) | Q3_K_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.IQ3_S.gguf) | IQ3_S | 7.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.IQ3_M.gguf) | IQ3_M | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q3_K_M.gguf) | Q3_K_M | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q3_K_L.gguf) | Q3_K_L | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.IQ4_XS.gguf) | IQ4_XS | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q4_K_S.gguf) | Q4_K_S | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q4_K_M.gguf) | Q4_K_M | 9.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q5_K_S.gguf) | Q5_K_S | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q5_K_M.gguf) | Q5_K_M | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q6_K.gguf) | Q6_K | 13.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LlaMixtral-MoE-16B-chat-GGUF/resolve/main/LlaMixtral-MoE-16B-chat.Q8_0.gguf) | Q8_0 | 17.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MultiVerse_70B-i1-GGUF | mradermacher | 2024-05-06T05:21:09Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MTSAIR/MultiVerse_70B",
"base_model:quantized:MTSAIR/MultiVerse_70B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-03T08:54:49Z | ---
base_model: MTSAIR/MultiVerse_70B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE
license_name: qwen
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/MTSAIR/MultiVerse_70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MultiVerse_70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ1_S.gguf) | i1-IQ1_S | 18.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ1_M.gguf) | i1-IQ1_M | 19.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 23.5 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ2_S.gguf) | i1-IQ2_S | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ2_M.gguf) | i1-IQ2_M | 26.9 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q2_K.gguf) | i1-Q2_K | 28.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 29.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 31.5 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ3_S.gguf) | i1-IQ3_S | 33.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 33.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ3_M.gguf) | i1-IQ3_M | 34.8 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 36.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 40.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 40.4 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q4_0.gguf) | i1-Q4_0 | 42.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 42.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 45.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 52.9 | |
| [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 60.9 | practically like static Q6_K |
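For the larger quants above that are split into PART 1/PART 2, the parts are plain byte-level splits and only need to be concatenated back into a single file (the README linked under Usage covers the same step with shell tools). Here is a hedged Python sketch using the i1-Q5_K_S filenames from the table:

```python
# Sketch: download both parts of the i1-Q5_K_S quant and join them into one GGUF.
# The parts are simple byte-level splits, so in-order concatenation is all that is needed.
from huggingface_hub import hf_hub_download

repo = "mradermacher/MultiVerse_70B-i1-GGUF"
parts = [
    "MultiVerse_70B.i1-Q5_K_S.gguf.part1of2",
    "MultiVerse_70B.i1-Q5_K_S.gguf.part2of2",
]

with open("MultiVerse_70B.i1-Q5_K_S.gguf", "wb") as out:
    for name in parts:
        local = hf_hub_download(repo_id=repo, filename=name)
        with open(local, "rb") as f:
            while chunk := f.read(64 * 1024 * 1024):  # stream in 64 MiB chunks
                out.write(chunk)
```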
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Synatra-7B-v0.3-RP-GGUF | mradermacher | 2024-05-06T05:21:06Z | 17 | 1 | transformers | [
"transformers",
"gguf",
"ko",
"base_model:maywell/Synatra-7B-v0.3-RP",
"base_model:quantized:maywell/Synatra-7B-v0.3-RP",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T09:51:38Z | ---
base_model: maywell/Synatra-7B-v0.3-RP
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/maywell/Synatra-7B-v0.3-RP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/neural-chat-7b-v3-1-GGUF | mradermacher | 2024-05-06T05:20:58Z | 90 | 0 | transformers | [
"transformers",
"gguf",
"LLMs",
"mistral",
"Intel",
"en",
"dataset:Open-Orca/SlimOrca",
"base_model:Intel/neural-chat-7b-v3-1",
"base_model:quantized:Intel/neural-chat-7b-v3-1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-03T12:09:19Z | ---
base_model: Intel/neural-chat-7b-v3-1
datasets:
- Open-Orca/SlimOrca
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- LLMs
- mistral
- Intel
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Intel/neural-chat-7b-v3-1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/neural-chat-7b-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/pandafish-7b-GGUF | mradermacher | 2024-05-06T05:20:56Z | 9 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:ichigoberry/pandafish-7b",
"base_model:quantized:ichigoberry/pandafish-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-03T12:13:43Z | ---
base_model: ichigoberry/pandafish-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ichigoberry/pandafish-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Eurus-7b-sft-GGUF | mradermacher | 2024-05-06T05:20:27Z | 130 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"en",
"dataset:openbmb/UltraInteract",
"dataset:stingning/ultrachat",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:Open-Orca/OpenOrca",
"base_model:pharaouk/Eurus-7b-sft",
"base_model:quantized:pharaouk/Eurus-7b-sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T16:05:16Z | ---
base_model: pharaouk/Eurus-7b-sft
datasets:
- openbmb/UltraInteract
- stingning/ultrachat
- openchat/openchat_sharegpt4_dataset
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- reasoning
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/pharaouk/Eurus-7b-sft
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Fireplace-34b-GGUF | mradermacher | 2024-05-06T05:20:22Z | 63 | 0 | transformers | [
"transformers",
"gguf",
"fireplace",
"function-calling",
"code",
"code-instruct",
"conversational",
"text-generation-inference",
"valiant",
"valiant-labs",
"smaug",
"yi",
"yi-34b",
"llama",
"llama-2",
"llama-2-chat",
"34b",
"en",
"base_model:ValiantLabs/Fireplace-34b",
"base_model:quantized:ValiantLabs/Fireplace-34b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-03T17:29:23Z | ---
base_model: ValiantLabs/Fireplace-34b
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
model_type: llama
quantized_by: mradermacher
tags:
- fireplace
- function-calling
- code
- code-instruct
- conversational
- text-generation-inference
- valiant
- valiant-labs
- smaug
- yi
- yi-34b
- llama
- llama-2
- llama-2-chat
- 34b
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ValiantLabs/Fireplace-34b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q2_K.gguf) | Q2_K | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.IQ3_XS.gguf) | IQ3_XS | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q3_K_S.gguf) | Q3_K_S | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.IQ3_S.gguf) | IQ3_S | 16.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.IQ3_M.gguf) | IQ3_M | 17.1 | |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q3_K_M.gguf) | Q3_K_M | 18.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q3_K_L.gguf) | Q3_K_L | 19.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.IQ4_XS.gguf) | IQ4_XS | 20.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q4_K_S.gguf) | Q4_K_S | 21.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q4_K_M.gguf) | Q4_K_M | 22.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q5_K_S.gguf) | Q5_K_S | 25.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q5_K_M.gguf) | Q5_K_M | 25.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q6_K.gguf) | Q6_K | 29.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fireplace-34b-GGUF/resolve/main/Fireplace-34b.Q8_0.gguf) | Q8_0 | 38.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/KittyNyanster-v1-GGUF | mradermacher | 2024-05-06T05:20:17Z | 191 | 2 | transformers | [
"transformers",
"gguf",
"roleplay",
"chat",
"mistral",
"en",
"base_model:arlineka/KittyNyanster-v1",
"base_model:quantized:arlineka/KittyNyanster-v1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-03T18:30:14Z | ---
base_model: arlineka/KittyNyanster-v1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- roleplay
- chat
- mistral
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/arlineka/KittyNyanster-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF | mradermacher | 2024-05-06T05:20:15Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"ko",
"base_model:rrw-x2/KoSOLAR-10.7B-DPO-v1.0",
"base_model:quantized:rrw-x2/KoSOLAR-10.7B-DPO-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T18:43:39Z | ---
base_model: rrw-x2/KoSOLAR-10.7B-DPO-v1.0
language:
- ko
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/rrw-x2/KoSOLAR-10.7B-DPO-v1.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q2_K.gguf) | Q2_K | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.IQ3_XS.gguf) | IQ3_XS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q3_K_S.gguf) | Q3_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.IQ3_S.gguf) | IQ3_S | 5.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.IQ3_M.gguf) | IQ3_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q3_K_M.gguf) | Q3_K_M | 5.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q3_K_L.gguf) | Q3_K_L | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.IQ4_XS.gguf) | IQ4_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q4_K_S.gguf) | Q4_K_S | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q4_K_M.gguf) | Q4_K_M | 7.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q5_K_S.gguf) | Q5_K_S | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q5_K_M.gguf) | Q5_K_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-DPO-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-DPO-v1.0.Q8_0.gguf) | Q8_0 | 11.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/mistral-7b-medical-assistance-GGUF | mradermacher | 2024-05-06T05:20:12Z | 17 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:Hdhsjfjdsj/mistral-7b-medical-assistance",
"base_model:quantized:Hdhsjfjdsj/mistral-7b-medical-assistance",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-03T18:55:27Z | ---
base_model: Hdhsjfjdsj/mistral-7b-medical-assistance
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Hdhsjfjdsj/mistral-7b-medical-assistance
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/pandafish-dt-7b-GGUF | mradermacher | 2024-05-06T05:20:10Z | 64 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"CultriX/MergeCeption-7B-v3",
"en",
"base_model:ichigoberry/pandafish-dt-7b",
"base_model:quantized:ichigoberry/pandafish-dt-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-03T19:03:45Z | ---
base_model: ichigoberry/pandafish-dt-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- CultriX/MergeCeption-7B-v3
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ichigoberry/pandafish-dt-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/KunoichiVerse-7B-GGUF | mradermacher | 2024-05-06T05:19:51Z | 28 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:Ppoyaa/KunoichiVerse-7B",
"base_model:quantized:Ppoyaa/KunoichiVerse-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T21:48:59Z | ---
base_model: Ppoyaa/KunoichiVerse-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Ppoyaa/KunoichiVerse-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/bagel-20b-v04-GGUF | mradermacher | 2024-05-06T05:19:46Z | 65 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ai2_arc",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:lmsys/lmsys-chat-1m",
"dataset:ParisNeo/lollms_aware_dataset",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:kingbri/PIPPA-shareGPT",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:ropes",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:b-mc2/sql-create-context",
"dataset:squad_v2",
"dataset:mattpscott/airoboros-summarization",
"dataset:migtissera/Synthia-v1.3",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:winogrande",
"base_model:jondurbin/bagel-20b-v04",
"base_model:quantized:jondurbin/bagel-20b-v04",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T22:08:35Z | ---
base_model: jondurbin/bagel-20b-v04
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/internlm/internlm2-20b#open-source-license
license_name: internlm2-20b
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jondurbin/bagel-20b-v04
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q2_K.gguf) | Q2_K | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ3_XS.gguf) | IQ3_XS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q3_K_S.gguf) | Q3_K_S | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ3_S.gguf) | IQ3_S | 9.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ3_M.gguf) | IQ3_M | 9.9 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q3_K_M.gguf) | Q3_K_M | 10.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q3_K_L.gguf) | Q3_K_L | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ4_XS.gguf) | IQ4_XS | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q4_K_S.gguf) | Q4_K_S | 12.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q4_K_M.gguf) | Q4_K_M | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q5_K_S.gguf) | Q5_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q5_K_M.gguf) | Q5_K_M | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q6_K.gguf) | Q6_K | 17.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q8_0.gguf) | Q8_0 | 21.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mixtral_AI_CyberLAW-GGUF | mradermacher | 2024-05-06T05:19:43Z | 108 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"Cyber-Series",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-03T22:34:07Z | ---
base_model: LeroyDyer/Mixtral_AI_CyberLAW
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- Cyber-Series
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_CyberLAW
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberLAW-GGUF/resolve/main/Mixtral_AI_CyberLAW.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/StarMonarch-7B-GGUF | mradermacher | 2024-05-06T05:19:34Z | 71 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:Ppoyaa/StarMonarch-7B",
"base_model:quantized:Ppoyaa/StarMonarch-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-04T00:13:47Z | ---
base_model: Ppoyaa/StarMonarch-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Ppoyaa/StarMonarch-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3-70B-Instruct-norefusal-GGUF | mradermacher | 2024-05-06T05:19:27Z | 29 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:theo77186/Llama-3-70B-Instruct-norefusal",
"base_model:quantized:theo77186/Llama-3-70B-Instruct-norefusal",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-05T19:41:12Z | ---
base_model: theo77186/Llama-3-70B-Instruct-norefusal
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/theo77186/Llama-3-70B-Instruct-norefusal
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
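The split Q6_K and Q8_0 files in the table below are plain byte-level parts of one GGUF, so they only need to be joined in order before loading. A minimal sketch, assuming both parts of the Q6_K quant are already in the working directory:
```python
# Minimal sketch: join the two parts of the split Q6_K quant into one GGUF file.
# Assumes both .part files were already downloaded into the current directory.
import shutil

parts = [
    "Llama-3-70B-Instruct-norefusal.Q6_K.gguf.part1of2",
    "Llama-3-70B-Instruct-norefusal.Q6_K.gguf.part2of2",
]

with open("Llama-3-70B-Instruct-norefusal.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream each part so the ~58 GB result is never held in memory.
            shutil.copyfileobj(src, out)
```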
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-norefusal-GGUF/resolve/main/Llama-3-70B-Instruct-norefusal.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/dragonwar-7b-s1-GGUF | mradermacher | 2024-05-06T05:18:44Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"book",
"en",
"base_model:maldv/dragonwar-7b-s1",
"base_model:quantized:maldv/dragonwar-7b-s1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-04T05:42:53Z | ---
base_model: maldv/dragonwar-7b-s1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- unsloth
- book
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/maldv/dragonwar-7b-s1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-s1-GGUF/resolve/main/dragonwar-7b-s1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/IsenHumourAI-GGUF | mradermacher | 2024-05-06T05:18:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:jberni29/IsenHumourAI",
"base_model:quantized:jberni29/IsenHumourAI",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-04T06:46:50Z | ---
base_model: jberni29/IsenHumourAI
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jberni29/IsenHumourAI
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
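For interactive use it can help to stream tokens as they are generated. A short sketch, assuming `llama-cpp-python` is installed and the Q4_K_S file from the table below is already on disk; the prompt and settings are illustrative only.
```python
# Minimal sketch: stream a completion token by token from a downloaded quant.
from llama_cpp import Llama

llm = Llama(model_path="IsenHumourAI.Q4_K_S.gguf", n_ctx=2048)
for chunk in llm("Tell a short, friendly joke about GPUs.", max_tokens=80, stream=True):
    # Each streamed chunk carries a small piece of the generated text.
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```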
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/talosian-7b-GGUF | mradermacher | 2024-05-06T05:18:30Z | 162 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jspr/talosian-7b",
"base_model:quantized:jspr/talosian-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-04-04T07:46:01Z | ---
base_model: jspr/talosian-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jspr/talosian-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.IQ3_XS.gguf) | IQ3_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q3_K_S.gguf) | Q3_K_S | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.IQ3_S.gguf) | IQ3_S | 3.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.IQ3_M.gguf) | IQ3_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q3_K_L.gguf) | Q3_K_L | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.IQ4_XS.gguf) | IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q4_K_S.gguf) | Q4_K_S | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q6_K.gguf) | Q6_K | 6.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/talosian-7b-GGUF/resolve/main/talosian-7b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MistarlingMaid-2x7B-base-GGUF | mradermacher | 2024-05-06T05:18:27Z | 74 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:dawn17/MistarlingMaid-2x7B-base",
"base_model:quantized:dawn17/MistarlingMaid-2x7B-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-04T08:20:09Z | ---
base_model: dawn17/MistarlingMaid-2x7B-base
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/dawn17/MistarlingMaid-2x7B-base
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ3_XS.gguf) | IQ3_XS | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ3_M.gguf) | IQ3_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q5_K_M.gguf) | Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF | mradermacher | 2024-05-06T05:18:21Z | 40 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Undi95/MythoMax-L2-Kimiko-v2-13b",
"base_model:quantized:Undi95/MythoMax-L2-Kimiko-v2-13b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-04T09:08:24Z | ---
base_model: Undi95/MythoMax-L2-Kimiko-v2-13b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Undi95/MythoMax-L2-Kimiko-v2-13b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q2_K.gguf) | Q2_K | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.IQ3_XS.gguf) | IQ3_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.IQ3_M.gguf) | IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.IQ4_XS.gguf) | IQ4_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/UNAversal-8x7B-v1beta-i1-GGUF | mradermacher | 2024-05-06T05:18:09Z | 61 | 1 | transformers | [
"transformers",
"gguf",
"UNA",
"juanako",
"mixtral",
"MoE",
"en",
"base_model:fblgit/UNAversal-8x7B-v1beta",
"base_model:quantized:fblgit/UNAversal-8x7B-v1beta",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-04T09:42:55Z | ---
base_model: fblgit/UNAversal-8x7B-v1beta
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- UNA
- juanako
- mixtral
- MoE
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/fblgit/UNAversal-8x7B-v1beta
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
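Since the list below is sorted by size, a practical way to choose an imatrix quant is by memory budget. The sketch below asks the Hub for the file sizes in this repo and keeps the largest GGUF under a chosen budget; the 20 GB figure and the selection logic are illustrative assumptions, not guidance from this card.
```python
# Minimal sketch: pick the largest GGUF in this repo that fits a memory budget.
# Assumes `huggingface_hub` is installed; the 20 GB budget is an arbitrary example.
from huggingface_hub import HfApi

info = HfApi().model_info(
    "mradermacher/UNAversal-8x7B-v1beta-i1-GGUF", files_metadata=True
)

budget = 20 * 1024**3  # 20 GB, illustrative
fits = [
    (s.rfilename, s.size)
    for s in info.siblings
    if s.rfilename.endswith(".gguf") and s.size is not None and s.size <= budget
]
print(max(fits, key=lambda f: f[1]) if fits else "no quant fits this budget")
```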
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ1_S.gguf) | i1-IQ1_S | 10.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ1_M.gguf) | i1-IQ1_M | 11.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ2_S.gguf) | i1-IQ2_S | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ2_M.gguf) | i1-IQ2_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ3_S.gguf) | i1-IQ3_S | 20.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ3_M.gguf) | i1-IQ3_M | 21.7 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.3 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q4_0.gguf) | i1-Q4_0 | 26.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | |
| [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF | mradermacher | 2024-05-06T05:18:04Z | 37 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"megamerge",
"code",
"Cyber-Series",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:gate369/Alpaca-Star",
"dataset:gate369/alpaca-star-ascii",
"base_model:LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0",
"base_model:quantized:LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-04T10:11:11Z | ---
base_model: LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0
datasets:
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin
- WhiteRabbitNeo/WRN-Chapter-2
- WhiteRabbitNeo/WRN-Chapter-1
- gate369/Alpaca-Star
- gate369/alpaca-star-ascii
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- mergekit
- megamerge
- code
- Cyber-Series
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF | mradermacher | 2024-05-06T05:17:59Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ddh0/Mistral-10.7B-Instruct-v0.2",
"base_model:quantized:ddh0/Mistral-10.7B-Instruct-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-04T12:08:09Z | ---
base_model: ddh0/Mistral-10.7B-Instruct-v0.2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ddh0/Mistral-10.7B-Instruct-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
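Since the underlying model is an instruct model, chat-style calls are a natural way to exercise these quants. A small sketch, assuming `llama-cpp-python` is installed and the Q4_K_M file from the table below has already been downloaded; the prompt and settings are illustrative.
```python
# Minimal sketch: chat-style completion against a locally downloaded quant.
from llama_cpp import Llama

# The file name matches the Q4_K_M entry in the table below; n_ctx is illustrative.
llm = Llama(model_path="Mistral-10.7B-Instruct-v0.2.Q4_K_M.gguf", n_ctx=4096)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what does a Q4_K_M quant trade off?"}],
    max_tokens=96,
)
print(resp["choices"][0]["message"]["content"])
```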
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q3_K_S.gguf) | Q3_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q3_K_L.gguf) | Q3_K_L | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ4_XS.gguf) | IQ4_XS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q5_K_S.gguf) | Q5_K_S | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q6_K.gguf) | Q6_K | 9.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/megatron_2.1_MoE_2x7B-GGUF | mradermacher | 2024-05-06T05:17:51Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"merge",
"en",
"base_model:Eurdem/megatron_2.1_MoE_2x7B",
"base_model:quantized:Eurdem/megatron_2.1_MoE_2x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-04T12:13:09Z | ---
base_model: Eurdem/megatron_2.1_MoE_2x7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- merge
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Eurdem/megatron_2.1_MoE_2x7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.IQ3_XS.gguf) | IQ3_XS | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.IQ3_M.gguf) | IQ3_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q5_K_M.gguf) | Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/athena-120b-i1-GGUF | mradermacher | 2024-05-06T05:17:47Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"base_model:ibivibiv/athena-120b",
"base_model:quantized:ibivibiv/athena-120b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-04T12:24:29Z | ---
base_model: ibivibiv/athena-120b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ibivibiv/athena-120b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/athena-120b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
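Several of the larger quants below are split into `.partXofY` pieces that must be joined into a single `.gguf` before use. A small Python sketch of that step (my own illustration; the file names are taken from the Q4_K_M row below, and lexical sorting works here because each split has fewer than ten parts):

```python
# Sketch: join split GGUF pieces (e.g. *.gguf.part1of2, *.gguf.part2of2) into
# one file. Works for the 2-3 part splits in this repo; adjust names as needed.
import glob
import shutil

parts = sorted(glob.glob("athena-120b.i1-Q4_K_M.gguf.part*"))
with open("athena-120b.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy, keeps memory usage low
```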
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ1_S.gguf) | i1-IQ1_S | 26.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ1_M.gguf) | i1-IQ1_M | 28.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.8 | |
| [GGUF](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.3 | |
| [GGUF](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ2_S.gguf) | i1-IQ2_S | 38.1 | |
| [GGUF](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ2_M.gguf) | i1-IQ2_M | 41.4 | |
| [GGUF](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q2_K.gguf) | i1-Q2_K | 45.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.7 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 52.9 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 54.7 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 58.8 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 63.9 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.1 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 68.9 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.2 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.1 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 83.7 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.0 | |
| [PART 1](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/athena-120b-i1-GGUF/resolve/main/athena-120b.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 99.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/CatNyanster-34b-i1-GGUF | mradermacher | 2024-05-06T05:17:35Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-04-04T15:33:46Z | ---
base_model: arlineka/CatNyanster-34b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/arlineka/CatNyanster-34b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CatNyanster-34b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mermaid_11.5B-GGUF | mradermacher | 2024-05-06T05:17:13Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/Mermaid_11.5B",
"base_model:quantized:TroyDoesAI/Mermaid_11.5B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-04T19:16:21Z | ---
base_model: TroyDoesAI/Mermaid_11.5B
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/Mermaid_11.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
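If you would rather fetch a single quant programmatically than through the browser, a sketch using `huggingface_hub` (my own illustration; the repo id and file name match the Q4_K_M row in the table below):

```python
# Sketch: download one quant file from this repo via huggingface_hub
# (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Mermaid_11.5B-GGUF",
    filename="Mermaid_11.5B.Q4_K_M.gguf",  # "fast, recommended" in the table below
)
print(path)  # local path of the cached GGUF file
```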
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q2_K.gguf) | Q2_K | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ3_XS.gguf) | IQ3_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q3_K_S.gguf) | Q3_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ3_S.gguf) | IQ3_S | 5.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ3_M.gguf) | IQ3_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q3_K_L.gguf) | Q3_K_L | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ4_XS.gguf) | IQ4_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q4_K_S.gguf) | Q4_K_S | 7.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q4_K_M.gguf) | Q4_K_M | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q5_K_S.gguf) | Q5_K_S | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q5_K_M.gguf) | Q5_K_M | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q6_K.gguf) | Q6_K | 9.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q8_0.gguf) | Q8_0 | 12.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|