| modelId (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/Eurus-70b-sft-fixed-i1-GGUF | mradermacher | 2024-05-06T05:00:08Z | 84 | 2 | transformers | [
"transformers",
"gguf",
"reasoning",
"en",
"dataset:openbmb/UltraInteract_sft",
"dataset:stingning/ultrachat",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-12T07:14:07Z | ---
base_model: jukofyork/Eurus-70b-sft-fixed
datasets:
- openbmb/UltraInteract_sft
- stingning/ultrachat
- openchat/openchat_sharegpt4_dataset
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- reasoning
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/jukofyork/Eurus-70b-sft-fixed
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
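As a concrete starting point, here is a minimal sketch (not taken from the original card; the chosen quant file, context size, and prompt are illustrative assumptions) that downloads one single-file quant from this repo with `huggingface_hub` and runs it with `llama-cpp-python`:

```python
# Assumed packages: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the i1-Q4_K_M file listed in the table below (roughly 41.5 GB for this 70B model).
model_path = hf_hub_download(
    repo_id="mradermacher/Eurus-70b-sft-fixed-i1-GGUF",
    filename="Eurus-70b-sft-fixed.i1-Q4_K_M.gguf",
)

# Load the quantized model; n_ctx and the prompt are illustrative choices.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write a short proof sketch that the square root of 2 is irrational.", max_tokens=256)
print(out["choices"][0]["text"])
```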
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF/resolve/main/Eurus-70b-sft-fixed.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
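For the multi-part i1-Q6_K entry above, the parts must be joined byte-for-byte into a single `.gguf` before loading. A minimal sketch (the part names match the table; the output filename is an assumption):

```python
import shutil

parts = [
    "Eurus-70b-sft-fixed.i1-Q6_K.gguf.part1of2",
    "Eurus-70b-sft-fixed.i1-Q6_K.gguf.part2of2",
]
# Concatenate the downloaded parts, in order, into one GGUF file.
with open("Eurus-70b-sft-fixed.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```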
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/CodeZero-7B-GGUF | mradermacher | 2024-05-06T05:00:02Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"ResplendentAI/DaturaCookie_7B",
"ResplendentAI/Flora_7B",
"en",
"base_model:bunnycore/CodeZero-7B",
"base_model:quantized:bunnycore/CodeZero-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-12T07:47:27Z | ---
base_model: bunnycore/CodeZero-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- ResplendentAI/DaturaCookie_7B
- ResplendentAI/Flora_7B
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/bunnycore/CodeZero-7B
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I probably have not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
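If you want to check which quant files this repo actually provides before downloading, here is a minimal sketch using `huggingface_hub` (the repo id comes from the links below; filtering on the `.gguf` suffix is an assumption about the naming scheme):

```python
from huggingface_hub import list_repo_files

# List every file in the quant repo and keep only the GGUF files.
for name in sorted(list_repo_files("mradermacher/CodeZero-7B-GGUF")):
    if name.endswith(".gguf"):
        print(name)  # e.g. CodeZero-7B.Q4_K_M.gguf
```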
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.IQ4_XS.gguf) | IQ4_XS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q4_K_M.gguf) | Q4_K_M | 5.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q5_K_S.gguf) | Q5_K_S | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q5_K_M.gguf) | Q5_K_M | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q6_K.gguf) | Q6_K | 7.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CodeZero-7B-GGUF/resolve/main/CodeZero-7B.Q8_0.gguf) | Q8_0 | 9.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Orpomis-Prime-7B-it-GGUF | mradermacher | 2024-05-06T04:59:54Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"kaist-ai/mistral-orpo-beta",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"mistralai/Mistral-7B-Instruct-v0.2",
"en",
"base_model:saucam/Orpomis-Prime-7B-it",
"base_model:quantized:saucam/Orpomis-Prime-7B-it",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-12T13:30:28Z | ---
base_model: saucam/Orpomis-Prime-7B-it
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- kaist-ai/mistral-orpo-beta
- NousResearch/Hermes-2-Pro-Mistral-7B
- mistralai/Mistral-7B-Instruct-v0.2
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/saucam/Orpomis-Prime-7B-it
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I probably have not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-it-GGUF/resolve/main/Orpomis-Prime-7B-it.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF | mradermacher | 2024-05-06T04:59:51Z | 20 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mpasila/Mistral-7B-Erebus-v3-Instruct-32k",
"base_model:quantized:mpasila/Mistral-7B-Erebus-v3-Instruct-32k",
"endpoints_compatible",
"region:us"
] | null | 2024-04-12T15:16:22Z | ---
base_model: mpasila/Mistral-7B-Erebus-v3-Instruct-32k
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mpasila/Mistral-7B-Erebus-v3-Instruct-32k
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I probably have not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Orpomis-Prime-7B-GGUF | mradermacher | 2024-05-06T04:59:39Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"en",
"base_model:saucam/Orpomis-Prime-7B",
"base_model:quantized:saucam/Orpomis-Prime-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-12T18:40:08Z | ---
base_model: saucam/Orpomis-Prime-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/saucam/Orpomis-Prime-7B
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I probably have not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MysticNoromaidx-i1-GGUF | mradermacher | 2024-05-06T04:59:36Z | 57 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-12T19:23:42Z | ---
base_model: Fredithefish/MysticNoromaidx
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Fredithefish/MysticNoromaidx
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MysticNoromaidx-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MystixNoromaidx-GGUF | mradermacher | 2024-05-06T04:59:31Z | 37 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-04-12T21:14:17Z | ---
base_model: Fredithefish/MystixNoromaidx
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Fredithefish/MystixNoromaidx
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Experiment26-7B-GGUF | mradermacher | 2024-05-06T04:59:13Z | 74 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"base_model:yam-peleg/Experiment26-7B",
"base_model:quantized:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-12T23:47:59Z | ---
base_model: yam-peleg/Experiment26-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/yam-peleg/Experiment26-7B
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I probably have not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MystixNoromaidx-i1-GGUF | mradermacher | 2024-05-06T04:59:10Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-04-13T00:11:33Z | ---
base_model: Fredithefish/MystixNoromaidx
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Fredithefish/MystixNoromaidx
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MystixNoromaidx-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/strix-rufipes-70b-GGUF | mradermacher | 2024-05-06T04:59:05Z | 102 | 0 | transformers | [
"transformers",
"gguf",
"logic",
"planning",
"en",
"base_model:ibivibiv/strix-rufipes-70b",
"base_model:quantized:ibivibiv/strix-rufipes-70b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-13T01:54:59Z | ---
base_model: ibivibiv/strix-rufipes-70b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- logic
- planning
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ibivibiv/strix-rufipes-70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/strix-rufipes-70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF | mradermacher | 2024-05-06T04:59:02Z | 28 | 0 | transformers | [
"transformers",
"gguf",
"Italian",
"Mistral",
"finetuning",
"Text Generation",
"it",
"dataset:scribis/Wikipedia_it_Trame_Romanzi",
"dataset:scribis/Corpus-Frasi-da-Opere-Letterarie",
"dataset:scribis/Wikipedia-it-Trame-di-Film",
"dataset:scribis/Wikipedia-it-Descrizioni-di-Dipinti",
"dataset:scribis/Wikipedia-it-Mitologia-Greca",
"base_model:scribis/Fantastica-7b-Instruct-0.2-Italian_merged",
"base_model:quantized:scribis/Fantastica-7b-Instruct-0.2-Italian_merged",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-13T02:20:18Z | ---
base_model: scribis/Fantastica-7b-Instruct-0.2-Italian_merged
datasets:
- scribis/Wikipedia_it_Trame_Romanzi
- scribis/Corpus-Frasi-da-Opere-Letterarie
- scribis/Wikipedia-it-Trame-di-Film
- scribis/Wikipedia-it-Descrizioni-di-Dipinti
- scribis/Wikipedia-it-Mitologia-Greca
language:
- it
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Italian
- Mistral
- finetuning
- Text Generation
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/scribis/Fantastica-7b-Instruct-0.2-Italian_merged
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I probably have not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Uoxudo_V2-GGUF | mradermacher | 2024-05-06T04:58:52Z | 76 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:TheHappyDrone/Uoxudo_V2",
"base_model:quantized:TheHappyDrone/Uoxudo_V2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-13T02:48:10Z | ---
base_model: TheHappyDrone/Uoxudo_V2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TheHappyDrone/Uoxudo_V2
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I probably have not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/PiVoT-SUS-RP-GGUF | mradermacher | 2024-05-06T04:58:49Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:maywell/PiVoT-SUS-RP",
"base_model:quantized:maywell/PiVoT-SUS-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-13T03:07:42Z | ---
base_model: maywell/PiVoT-SUS-RP
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/maywell/PiVoT-SUS-RP
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PiVoT-SUS-RP-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Pyrhea-72B-GGUF | mradermacher | 2024-05-06T04:58:36Z | 63 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"davidkim205/Rhea-72b-v0.5",
"abacusai/Smaug-72B-v0.1",
"en",
"base_model:saucam/Pyrhea-72B",
"base_model:quantized:saucam/Pyrhea-72B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-13T05:01:17Z | ---
base_model: saucam/Pyrhea-72B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- davidkim205/Rhea-72b-v0.5
- abacusai/Smaug-72B-v0.1
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/saucam/Pyrhea-72B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q2_K.gguf) | Q2_K | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.IQ3_XS.gguf) | IQ3_XS | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.IQ3_S.gguf) | IQ3_S | 31.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q3_K_S.gguf) | Q3_K_S | 31.7 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.IQ3_M.gguf) | IQ3_M | 33.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q3_K_M.gguf) | Q3_K_M | 35.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q3_K_L.gguf) | Q3_K_L | 38.6 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.IQ4_XS.gguf) | IQ4_XS | 39.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q4_K_S.gguf) | Q4_K_S | 41.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q4_K_M.gguf) | Q4_K_M | 43.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q5_K_S.gguf) | Q5_K_S | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q5_K_M.gguf.part2of2) | Q5_K_M | 51.4 | |
| [PART 1](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q6_K.gguf.part2of2) | Q6_K | 59.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Pyrhea-72B-GGUF/resolve/main/Pyrhea-72B.Q8_0.gguf.part2of2) | Q8_0 | 76.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/FashionGPT-70B-V1-i1-GGUF | mradermacher | 2024-05-06T04:58:29Z | 72 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ehartford/samantha-data",
"dataset:Open-Orca/OpenOrca",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"base_model:ICBU-NPU/FashionGPT-70B-V1",
"base_model:quantized:ICBU-NPU/FashionGPT-70B-V1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-13T10:49:37Z | ---
base_model: ICBU-NPU/FashionGPT-70B-V1
datasets:
- ehartford/samantha-data
- Open-Orca/OpenOrca
- jondurbin/airoboros-gpt4-1.4.1
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ICBU-NPU/FashionGPT-70B-V1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/FashionGPT-70B-V1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
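As one possible way to fetch a single quant from this repository, here is a small Python sketch assuming the `huggingface_hub` package is installed (the file name is taken from the Q4_K_M row in the table below):
```python
from huggingface_hub import hf_hub_download

# Fetch one quant from this repository into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="mradermacher/FashionGPT-70B-V1-i1-GGUF",
    filename="FashionGPT-70B-V1.i1-Q4_K_M.gguf",
)
print(path)  # local path of the downloaded .gguf file
```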
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/FashionGPT-70B-V1-i1-GGUF/resolve/main/FashionGPT-70B-V1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MonaCeption-7B-SLERP-SFT-GGUF | mradermacher | 2024-05-06T04:58:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:CultriX/MonaCeption-7B-SLERP-SFT",
"base_model:quantized:CultriX/MonaCeption-7B-SLERP-SFT",
"endpoints_compatible",
"region:us"
] | null | 2024-04-13T12:12:34Z | ---
base_model: CultriX/MonaCeption-7B-SLERP-SFT
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CultriX/MonaCeption-7B-SLERP-SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
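For a rough illustration of local inference with one of the quants below, here is a hedged Python sketch assuming the third-party `llama-cpp-python` bindings and an already-downloaded file (parameter values are placeholders to adjust to your hardware):
```python
from llama_cpp import Llama

# Load a downloaded quant; context size and GPU offload are placeholders.
llm = Llama(
    model_path="MonaCeption-7B-SLERP-SFT.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm("Write one sentence about quantized models.", max_tokens=64)
print(out["choices"][0]["text"])
```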
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-SFT-GGUF/resolve/main/MonaCeption-7B-SLERP-SFT.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/CreativeSmart-2x7B-GGUF | mradermacher | 2024-05-06T04:58:07Z | 21 | 1 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"Nexusflow/Starling-LM-7B-beta",
"bunnycore/Chimera-Apex-7B",
"en",
"base_model:bunnycore/CreativeSmart-2x7B",
"base_model:quantized:bunnycore/CreativeSmart-2x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-13T14:05:37Z | ---
base_model: bunnycore/CreativeSmart-2x7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- Nexusflow/Starling-LM-7B-beta
- bunnycore/Chimera-Apex-7B
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/bunnycore/CreativeSmart-2x7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CreativeSmart-2x7B-GGUF/resolve/main/CreativeSmart-2x7B.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF | mradermacher | 2024-05-06T04:57:56Z | 88 | 0 | transformers | [
"transformers",
"gguf",
"llama-2",
"self-instruct",
"distillation",
"synthetic instruction",
"en",
"base_model:NousResearch/Nous-Hermes-Llama2-70b",
"base_model:quantized:NousResearch/Nous-Hermes-Llama2-70b",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-13T18:27:11Z | ---
base_model: NousResearch/Nous-Hermes-Llama2-70b
language:
- en
library_name: transformers
license:
- mit
quantized_by: mradermacher
tags:
- llama-2
- self-instruct
- distillation
- synthetic instruction
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Pyrhea-72B-i1-GGUF | mradermacher | 2024-05-06T04:57:47Z | 96 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"davidkim205/Rhea-72b-v0.5",
"abacusai/Smaug-72B-v0.1",
"en",
"base_model:saucam/Pyrhea-72B",
"base_model:quantized:saucam/Pyrhea-72B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-13T19:08:47Z | ---
base_model: saucam/Pyrhea-72B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- davidkim205/Rhea-72b-v0.5
- abacusai/Smaug-72B-v0.1
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/saucam/Pyrhea-72B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Pyrhea-72B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ1_S.gguf) | i1-IQ1_S | 16.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ1_M.gguf) | i1-IQ1_M | 17.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ2_S.gguf) | i1-IQ2_S | 23.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ2_M.gguf) | i1-IQ2_M | 25.3 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q2_K.gguf) | i1-Q2_K | 27.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ3_M.gguf) | i1-IQ3_M | 33.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 35.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 38.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q4_0.gguf) | i1-Q4_0 | 41.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 41.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 43.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 51.4 | |
| [PART 1](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Pyrhea-72B-i1-GGUF/resolve/main/Pyrhea-72B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 59.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/athene-noctua-13b-GGUF | mradermacher | 2024-05-06T04:57:33Z | 118 | 0 | transformers | [
"transformers",
"gguf",
"logic",
"reasoning",
"en",
"base_model:ibivibiv/athene-noctua-13b",
"base_model:quantized:ibivibiv/athene-noctua-13b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T00:13:49Z | ---
base_model: ibivibiv/athene-noctua-13b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- logic
- reasoning
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ibivibiv/athene-noctua-13b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/athene-noctua-13b-GGUF/resolve/main/athene-noctua-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mistral-22B-v0.2-GGUF | mradermacher | 2024-05-06T04:57:26Z | 31 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Vezora/Mistral-22B-v0.2",
"base_model:quantized:Vezora/Mistral-22B-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T00:33:11Z | ---
base_model: Vezora/Mistral-22B-v0.2
language:
- en
library_name: transformers
license: apache-2.0
no_imatrix: 'GGML_ASSERT: llama.cpp/ggml-quants.c:11239: grid_index >= 0'
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Vezora/Mistral-22B-v0.2
**No imatrix quants will be coming from me, as the model overflowed after 180k tokens, and llama.cpp crashed while generating most quants even with smaller training data.**
weighted/imatrix quants by bartowski (with smaller training data) can be found at https://huggingface.co/bartowski/Mistral-22B-v0.2-GGUF
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.IQ3_XS.gguf) | IQ3_XS | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.IQ3_S.gguf) | IQ3_S | 9.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.IQ3_M.gguf) | IQ3_M | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q6_K.gguf) | Q6_K | 18.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-22B-v0.2-GGUF/resolve/main/Mistral-22B-v0.2.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ValidateAI-3-33B-Ties-GGUF | mradermacher | 2024-05-06T04:57:23Z | 115 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"WizardLM/WizardCoder-33B-V1.1",
"codefuse-ai/CodeFuse-DeepSeek-33B",
"deepseek-ai/deepseek-coder-33b-instruct",
"en",
"base_model:arvindanand/ValidateAI-3-33B-Ties",
"base_model:quantized:arvindanand/ValidateAI-3-33B-Ties",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-14T00:38:43Z | ---
base_model: arvindanand/ValidateAI-3-33B-Ties
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- WizardLM/WizardCoder-33B-V1.1
- codefuse-ai/CodeFuse-DeepSeek-33B
- deepseek-ai/deepseek-coder-33b-instruct
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/arvindanand/ValidateAI-3-33B-Ties
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q2_K.gguf) | Q2_K | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.IQ3_XS.gguf) | IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.IQ3_S.gguf) | IQ3_S | 14.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.IQ3_M.gguf) | IQ3_M | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q3_K_M.gguf) | Q3_K_M | 16.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q3_K_L.gguf) | Q3_K_L | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.IQ4_XS.gguf) | IQ4_XS | 18.1 | |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q4_K_S.gguf) | Q4_K_S | 19.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q5_K_S.gguf) | Q5_K_S | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q5_K_M.gguf) | Q5_K_M | 23.6 | |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q6_K.gguf) | Q6_K | 27.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ValidateAI-3-33B-Ties-GGUF/resolve/main/ValidateAI-3-33B-Ties.Q8_0.gguf) | Q8_0 | 35.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/rubra-11h-orpo-GGUF | mradermacher | 2024-05-06T04:57:20Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:yingbei/rubra-11h-orpo",
"base_model:quantized:yingbei/rubra-11h-orpo",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-14T02:25:49Z | ---
base_model: yingbei/rubra-11h-orpo
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/yingbei/rubra-11h-orpo
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/rubra-11h-orpo-GGUF/resolve/main/rubra-11h-orpo.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/vidalet-alpha-GGUF | mradermacher | 2024-05-06T04:57:17Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-14T03:38:19Z | ---
base_model: MarcOrfilaCarreras/vidalet-alpha
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/MarcOrfilaCarreras/vidalet-alpha
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.IQ3_XS.gguf) | IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.IQ3_S.gguf) | IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.IQ3_M.gguf) | IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/vidalet-alpha-GGUF/resolve/main/vidalet-alpha.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OpenCerebrum-2.0-7B-GGUF | mradermacher | 2024-05-06T04:57:07Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"open-source",
"code",
"math",
"chemistry",
"biology",
"text-generation",
"question-answering",
"en",
"base_model:Locutusque/OpenCerebrum-2.0-7B",
"base_model:quantized:Locutusque/OpenCerebrum-2.0-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-04-14T04:08:05Z | ---
base_model: Locutusque/OpenCerebrum-2.0-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- open-source
- code
- math
- chemistry
- biology
- text-generation
- question-answering
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Locutusque/OpenCerebrum-2.0-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-2.0-7B-GGUF/resolve/main/OpenCerebrum-2.0-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OG-SQL-7B-GGUF | mradermacher | 2024-05-06T04:56:58Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"Text-to-sql",
"en",
"base_model:OneGate/OG-SQL-7B",
"base_model:quantized:OneGate/OG-SQL-7B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T07:17:11Z | ---
base_model: OneGate/OG-SQL-7B
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
tags:
- Text-to-sql
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/OneGate/OG-SQL-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OG-SQL-7B-GGUF/resolve/main/OG-SQL-7B.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
isaaclee/witness_count_mistral_train_run3 | isaaclee | 2024-05-06T04:56:54Z | 2 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-06T02:31:46Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: witness_count_mistral_train_run3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# witness_count_mistral_train_run3
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
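As a rough, hypothetical sketch of how these values might map onto a TRL `SFTTrainer` run (the actual training script is not part of this card; the dataset path, text column, and LoRA settings below are placeholders):
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "mistralai/Mistral-7B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# The training data is not specified in this card; "train.jsonl" is a placeholder.
train_dataset = load_dataset("json", data_files="train.jsonl")["train"]

# Values below are copied from the hyperparameter list above.
args = TrainingArguments(
    output_dir="witness_count_mistral_train_run3",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",                      # placeholder text column name
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # LoRA settings are not given in the card
    args=args,
)
trainer.train()
```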
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF | mradermacher | 2024-05-06T04:56:50Z | 90 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"chemistry",
"biology",
"climate",
"science",
"philosophy",
"nature",
"ecology",
"biomimicry",
"fauna",
"flora",
"en",
"dataset:Severian/Biomimicry",
"dataset:emrgnt-cmplxty/sciphi-textbooks-are-all-you-need",
"dataset:fmars/wiki_stem",
"dataset:fblgit/tree-of-knowledge",
"dataset:Severian/Bio-Design-Process",
"base_model:Joseph717171/ANIMA-Phi-Neptune-Mistral-10.7B",
"base_model:quantized:Joseph717171/ANIMA-Phi-Neptune-Mistral-10.7B",
"license:artistic-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T08:05:55Z | ---
base_model: Joseph717171/ANIMA-Phi-Neptune-Mistral-10.7B
datasets:
- Severian/Biomimicry
- emrgnt-cmplxty/sciphi-textbooks-are-all-you-need
- fmars/wiki_stem
- fblgit/tree-of-knowledge
- Severian/Bio-Design-Process
language:
- en
library_name: transformers
license: artistic-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- chemistry
- biology
- climate
- science
- philosophy
- nature
- ecology
- biomimicry
- fauna
- flora
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Joseph717171/ANIMA-Phi-Neptune-Mistral-10.7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ANIMA-Phi-Neptune-Mistral-10.7B-GGUF/resolve/main/ANIMA-Phi-Neptune-Mistral-10.7B.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/aegolius-acadicus-34b-v3-GGUF | mradermacher | 2024-05-06T04:56:38Z | 67 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:ibivibiv/aegolius-acadicus-34b-v3",
"base_model:quantized:ibivibiv/aegolius-acadicus-34b-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-14T09:27:12Z | ---
base_model: ibivibiv/aegolius-acadicus-34b-v3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ibivibiv/aegolius-acadicus-34b-v3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q2_K.gguf) | Q2_K | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.IQ3_XS.gguf) | IQ3_XS | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q3_K_S.gguf) | Q3_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.IQ3_S.gguf) | IQ3_S | 15.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q3_K_M.gguf) | Q3_K_M | 17.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q3_K_L.gguf) | Q3_K_L | 18.5 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.IQ4_XS.gguf) | IQ4_XS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q4_K_S.gguf) | Q4_K_S | 20.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q4_K_M.gguf) | Q4_K_M | 21.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q5_K_S.gguf) | Q5_K_S | 24.5 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q5_K_M.gguf) | Q5_K_M | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q6_K.gguf) | Q6_K | 29.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF/resolve/main/aegolius-acadicus-34b-v3.Q8_0.gguf) | Q8_0 | 37.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Narwhal-7b-GGUF | mradermacher | 2024-05-06T04:56:18Z | 130 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"orca",
"stable",
"stability",
"bloke",
"hf",
"7b",
"13b",
"34b",
"70b",
"22b",
"60b",
"coding",
"progaming",
"logic",
"deduction",
"en",
"base_model:Vezora/Narwhal-7b",
"base_model:quantized:Vezora/Narwhal-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T13:29:39Z | ---
base_model: Vezora/Narwhal-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama
- orca
- stable
- stability
- bloke
- hf
- 7b
- 13b
- 34b
- 70b
- 22b
- 60b
- coding
- progaming
- logic
- deduction
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Vezora/Narwhal-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-GGUF/resolve/main/Narwhal-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/HBDN-MoE-4x7B-GGUF | mradermacher | 2024-05-06T04:56:01Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeuroDonu/HBDN-MoE-4x7B",
"base_model:quantized:NeuroDonu/HBDN-MoE-4x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-14T15:44:51Z | ---
base_model: NeuroDonu/HBDN-MoE-4x7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/NeuroDonu/HBDN-MoE-4x7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q2_K.gguf) | Q2_K | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.IQ3_XS.gguf) | IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.IQ3_S.gguf) | IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.IQ3_M.gguf) | IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q3_K_M.gguf) | Q3_K_M | 11.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q3_K_L.gguf) | Q3_K_L | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.IQ4_XS.gguf) | IQ4_XS | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q4_K_S.gguf) | Q4_K_S | 13.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q4_K_M.gguf) | Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q5_K_S.gguf) | Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q5_K_M.gguf) | Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q6_K.gguf) | Q6_K | 19.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HBDN-MoE-4x7B-GGUF/resolve/main/HBDN-MoE-4x7B.Q8_0.gguf) | Q8_0 | 25.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/34b-beta2-GGUF | mradermacher | 2024-05-06T04:55:58Z | 35 | 2 | transformers | [
"transformers",
"gguf",
"en",
"zh",
"base_model:CausalLM/34b-beta2",
"base_model:quantized:CausalLM/34b-beta2",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-14T16:25:51Z | ---
base_model: CausalLM/34b-beta2
language:
- en
- zh
library_name: transformers
license: gpl-3.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CausalLM/34b-beta2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/34b-beta2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/34b-beta2-GGUF/resolve/main/34b-beta2.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
appvoid/merging-x3 | appvoid | 2024-05-06T04:55:33Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:merge:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:appvoid/palmer-002",
"base_model:merge:appvoid/palmer-002",
"base_model:appvoid/palmer-003",
"base_model:merge:appvoid/palmer-003",
"base_model:vihangd/DopeyTinyLlama-1.1B-v1",
"base_model:merge:vihangd/DopeyTinyLlama-1.1B-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T04:51:03Z | ---
base_model:
- vihangd/DopeyTinyLlama-1.1B-v1
- TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
- appvoid/palmer-002
- appvoid/palmer-003
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
* [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
* [appvoid/palmer-002](https://huggingface.co/appvoid/palmer-002)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: appvoid/palmer-002
layer_range: [0, 5]
- sources:
- model: appvoid/palmer-003
layer_range: [3, 10]
- sources:
- model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
layer_range: [7, 15]
- sources:
- model: vihangd/DopeyTinyLlama-1.1B-v1
layer_range: [11, 20]
- sources:
- model: appvoid/palmer-003
layer_range: [15, 21]
merge_method: passthrough
dtype: float16
```
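To reproduce a merge like this, the configuration is typically written to a file and handed to mergekit's CLI; the sketch below is an assumption-laden illustration (placeholder paths, and it presumes the `mergekit-yaml <config> <output-dir>` entry point from the mergekit README):
```python
# Sketch: running the YAML configuration above through mergekit from Python.
# Assumes mergekit is installed and provides the `mergekit-yaml` console script;
# both paths are placeholders, not files shipped with this repository.
import subprocess

config_path = "passthrough-merge.yaml"  # the configuration above, saved to disk
output_dir = "./merged-model"           # where the merged checkpoint is written
subprocess.run(["mergekit-yaml", config_path, output_dir], check=True)
```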
|
mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF | mradermacher | 2024-05-06T04:55:29Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"Riiid",
"llama-2",
"sheep-duck-llama-2",
"en",
"base_model:Riiid/sheep-duck-llama-2-70b-v1.1",
"base_model:quantized:Riiid/sheep-duck-llama-2-70b-v1.1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T21:46:13Z | ---
base_model: Riiid/sheep-duck-llama-2-70b-v1.1
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- Riiid
- llama-2
- sheep-duck-llama-2
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
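For the two-part Q6_K and Q8_0 downloads in the table below, the parts are joined by plain byte concatenation; a minimal sketch (assuming both parts are in the working directory):
```python
# Sketch: concatenating the split Q6_K parts from the table below into one .gguf.
# Equivalent to `cat part1of2 part2of2 > whole`; assumes the parts are already downloaded.
import shutil

parts = [
    "sheep-duck-llama-2-70b-v1.1.Q6_K.gguf.part1of2",
    "sheep-duck-llama-2-70b-v1.1.Q6_K.gguf.part2of2",
]
with open("sheep-duck-llama-2-70b-v1.1.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```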
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
smit2911/results | smit2911 | 2024-05-06T04:55:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-06T04:51:54Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/trootech/Fine%20tuning%20mistral%207B/runs/rniuh99d)
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
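Since PEFT appears under the framework versions, the repository presumably holds an adapter rather than full weights; a hedged loading sketch (dtype and device settings are placeholders):
```python
# Illustrative sketch: loading this repo as a PEFT adapter on top of its base model.
# Assumes the repository contains an adapter; requires transformers, peft and accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "smit2911/results"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```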
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.19.0
- Tokenizers 0.19.1 |
mradermacher/stairolz-70b-GGUF | mradermacher | 2024-05-06T04:55:27Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:uncensorie/stairolz-70b",
"base_model:quantized:uncensorie/stairolz-70b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T23:03:23Z | ---
base_model: uncensorie/stairolz-70b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/uncensorie/stairolz-70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NeuralStockFusion-7b-GGUF | mradermacher | 2024-05-06T04:55:21Z | 44 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Kukedlc/NeuralStockFusion-7b",
"base_model:quantized:Kukedlc/NeuralStockFusion-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T23:15:19Z | ---
base_model: Kukedlc/NeuralStockFusion-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Kukedlc/NeuralStockFusion-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/llama-65b-instruct-GGUF | mradermacher | 2024-05-06T04:55:07Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"upstage",
"llama",
"instruct",
"instruction",
"en",
"base_model:upstage/llama-65b-instruct",
"base_model:quantized:upstage/llama-65b-instruct",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T23:39:28Z | ---
base_model: upstage/llama-65b-instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- upstage
- llama
- instruct
- instruction
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/upstage/llama-65b-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
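If useful, a small sketch of fetching a single quant from this repo with `huggingface_hub` (the tooling choice is an assumption, not something this README prescribes):
```python
# Sketch: downloading one quant file from this repository.
# Assumes `pip install huggingface_hub`; Q4_K_S is one of the files listed below.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/llama-65b-instruct-GGUF",
    filename="llama-65b-instruct.Q4_K_S.gguf",
)
print(path)  # local path of the downloaded GGUF file
```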
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q2_K.gguf) | Q2_K | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.IQ3_XS.gguf) | IQ3_XS | 26.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.IQ3_S.gguf) | IQ3_S | 28.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q3_K_S.gguf) | Q3_K_S | 28.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.IQ3_M.gguf) | IQ3_M | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q3_K_M.gguf) | Q3_K_M | 31.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q3_K_L.gguf) | Q3_K_L | 34.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.IQ4_XS.gguf) | IQ4_XS | 35.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q4_K_S.gguf) | Q4_K_S | 37.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q4_K_M.gguf) | Q4_K_M | 39.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q5_K_S.gguf) | Q5_K_S | 45.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q5_K_M.gguf) | Q5_K_M | 46.3 | |
| [PART 1](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q6_K.gguf.part2of2) | Q6_K | 53.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-65b-instruct-GGUF/resolve/main/llama-65b-instruct.Q8_0.gguf.part2of2) | Q8_0 | 69.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/K2S3-Mistral-7b-v1.48-GGUF | mradermacher | 2024-05-06T04:54:27Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ko",
"base_model:Changgil/K2S3-Mistral-7b-v1.48",
"base_model:quantized:Changgil/K2S3-Mistral-7b-v1.48",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-15T07:19:31Z | ---
base_model: Changgil/K2S3-Mistral-7b-v1.48
language:
- en
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Changgil/K2S3-Mistral-7b-v1.48
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.IQ3_XS.gguf) | IQ3_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q6_K.gguf) | Q6_K | 6.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.48-GGUF/resolve/main/K2S3-Mistral-7b-v1.48.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF | mradermacher | 2024-05-06T04:54:10Z | 33 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-15T09:35:27Z | ---
base_model: LeroyDyer/Mixtral_AI_CyberTron_Ultra_SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Ultra_SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra_SFT-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra_SFT.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Einstein_x_Dolphin-GGUF | mradermacher | 2024-05-06T04:54:03Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bingbort/Einstein_x_Dolphin",
"base_model:quantized:bingbort/Einstein_x_Dolphin",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-15T10:35:58Z | ---
base_model: bingbort/Einstein_x_Dolphin
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/bingbort/Einstein_x_Dolphin
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein_x_Dolphin-GGUF/resolve/main/Einstein_x_Dolphin.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/deepmoney-67b-chat-GGUF | mradermacher | 2024-05-06T04:54:01Z | 123 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:TriadParty/deepmoney-sft",
"base_model:TriadParty/deepmoney-67b-chat",
"base_model:quantized:TriadParty/deepmoney-67b-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-15T10:52:11Z | ---
base_model: TriadParty/deepmoney-67b-chat
datasets:
- TriadParty/deepmoney-sft
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TriadParty/deepmoney-67b-chat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q2_K.gguf) | Q2_K | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.IQ3_XS.gguf) | IQ3_XS | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q3_K_S.gguf) | Q3_K_S | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.IQ3_S.gguf) | IQ3_S | 29.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.IQ3_M.gguf) | IQ3_M | 30.6 | |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q3_K_M.gguf) | Q3_K_M | 32.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q3_K_L.gguf) | Q3_K_L | 35.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.IQ4_XS.gguf) | IQ4_XS | 36.6 | |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q4_K_S.gguf) | Q4_K_S | 38.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q4_K_M.gguf) | Q4_K_M | 40.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q5_K_S.gguf) | Q5_K_S | 46.6 | |
| [GGUF](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q5_K_M.gguf) | Q5_K_M | 47.8 | |
| [PART 1](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q6_K.gguf.part2of2) | Q6_K | 55.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/deepmoney-67b-chat-GGUF/resolve/main/deepmoney-67b-chat.Q8_0.gguf.part2of2) | Q8_0 | 71.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/StableBeluga2-i1-GGUF | mradermacher | 2024-05-06T04:53:57Z | 96 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:conceptofmind/cot_submix_original",
"dataset:conceptofmind/flan2021_submix_original",
"dataset:conceptofmind/t0_submix_original",
"dataset:conceptofmind/niv2_submix_original",
"base_model:stabilityai/StableBeluga2",
"base_model:quantized:stabilityai/StableBeluga2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-15T12:50:41Z | ---
base_model: stabilityai/StableBeluga2
datasets:
- conceptofmind/cot_submix_original
- conceptofmind/flan2021_submix_original
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/stabilityai/StableBeluga2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/StableBeluga2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Rava-2x7B-v0.1-GGUF | mradermacher | 2024-05-06T04:53:49Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-15T17:23:17Z | ---
base_model: Novin-AI/Rava-2x7B-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Novin-AI/Rava-2x7B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Rava-3x7B-v0.1-GGUF | mradermacher | 2024-05-06T04:53:44Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-15T20:14:16Z | ---
base_model: Novin-AI/Rava-3x7B-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Novin-AI/Rava-3x7B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q2_K.gguf) | Q2_K | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.IQ3_M.gguf) | IQ3_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q6_K.gguf) | Q6_K | 15.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/kaori-34b-v4-GGUF | mradermacher | 2024-05-06T04:53:34Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:KaeriJenti/kaori-34b-v4",
"base_model:quantized:KaeriJenti/kaori-34b-v4",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-15T23:58:47Z | ---
base_model: KaeriJenti/kaori-34b-v4
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/KaeriJenti/kaori-34b-v4
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-GGUF/resolve/main/kaori-34b-v4.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/chronob-1.4-lin-70b-GGUF | mradermacher | 2024-05-06T04:53:18Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:uncensorie/chronob-1.4-lin-70b",
"base_model:quantized:uncensorie/chronob-1.4-lin-70b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T01:54:51Z | ---
base_model: uncensorie/chronob-1.4-lin-70b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/uncensorie/chronob-1.4-lin-70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/kaori-34b-v4-i1-GGUF | mradermacher | 2024-05-06T04:53:15Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:KaeriJenti/kaori-34b-v4",
"base_model:quantized:KaeriJenti/kaori-34b-v4",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T01:55:21Z | ---
base_model: KaeriJenti/kaori-34b-v4
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/KaeriJenti/kaori-34b-v4
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/kaori-34b-v4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NeuralSOTA-7B-slerp-GGUF | mradermacher | 2024-05-06T04:52:53Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralSoTa-7b-v0.1",
"Kukedlc/NeuralSynthesis-7B-v0.3",
"Kukedlc/NeuralSirKrishna-7b",
"en",
"base_model:Kukedlc/NeuralSOTA-7B-slerp",
"base_model:quantized:Kukedlc/NeuralSOTA-7B-slerp",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:40:06Z | ---
base_model: Kukedlc/NeuralSOTA-7B-slerp
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralSoTa-7b-v0.1
- Kukedlc/NeuralSynthesis-7B-v0.3
- Kukedlc/NeuralSirKrishna-7b
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Kukedlc/NeuralSOTA-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralSOTA-7B-slerp-GGUF/resolve/main/NeuralSOTA-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Inspire-7B-slerp-GGUF | mradermacher | 2024-05-06T04:52:51Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:tvkkishore/Inspire-7B-slerp",
"base_model:quantized:tvkkishore/Inspire-7B-slerp",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-16T06:08:27Z | ---
base_model: tvkkishore/Inspire-7B-slerp
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/tvkkishore/Inspire-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Inspire-7B-slerp-GGUF/resolve/main/Inspire-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Chimera-7B-TIES-GGUF | mradermacher | 2024-05-06T04:52:43Z | 37 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"DreadPoor/Siren-7B-slerp",
"S-miguel/The-Trinity-Coder-7B",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T07:05:14Z | ---
base_model: DreadPoor/Chimera-7B-TIES
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- DreadPoor/Siren-7B-slerp
- S-miguel/The-Trinity-Coder-7B
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/DreadPoor/Chimera-7B-TIES
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Chimera-7B-TIES-GGUF/resolve/main/Chimera-7B-TIES.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
appvoid/merging-x2 | appvoid | 2024-05-06T04:49:36Z | 141 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:merge:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:appvoid/palmer-002",
"base_model:merge:appvoid/palmer-002",
"base_model:appvoid/palmer-003",
"base_model:merge:appvoid/palmer-003",
"base_model:vihangd/DopeyTinyLlama-1.1B-v1",
"base_model:merge:vihangd/DopeyTinyLlama-1.1B-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T04:47:33Z | ---
base_model:
- appvoid/palmer-002
- TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
- vihangd/DopeyTinyLlama-1.1B-v1
- appvoid/palmer-003
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [appvoid/palmer-002](https://huggingface.co/appvoid/palmer-002)
* [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: appvoid/palmer-002
layer_range: [0, 5]
- sources:
- model: appvoid/palmer-003
layer_range: [4, 10]
- sources:
- model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
layer_range: [8, 15]
- sources:
- model: vihangd/DopeyTinyLlama-1.1B-v1
layer_range: [12, 20]
- sources:
- model: appvoid/palmer-003
layer_range: [16, 21]
merge_method: passthrough
dtype: float16
```
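For reference, a merge defined by a configuration like the one above is typically reproduced by saving the YAML to a file and invoking mergekit's command-line entry point. The exact CLI name and arguments below are assumptions based on mergekit's documented usage, shown only as a sketch.

```python
# Hypothetical sketch: re-run the merge from the YAML above (saved as config.yaml).
# Assumes the `mergekit-yaml` CLI from the mergekit package is installed and on PATH.
import subprocess

subprocess.run(["mergekit-yaml", "config.yaml", "./merged-model"], check=True)
```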
|
Kimty/Sqlcoder_v3 | Kimty | 2024-05-06T04:46:53Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T04:43:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
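The tags mark this as a llama-architecture text-generation model in 🤗 transformers format, so a minimal loading sketch might look like the following. This is untested; the SQL-style prompt and generation settings are assumptions suggested only by the model name.

```python
# Minimal sketch for loading this checkpoint with transformers (assumed usage).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Kimty/Sqlcoder_v3")
model = AutoModelForCausalLM.from_pretrained("Kimty/Sqlcoder_v3", device_map="auto")

inputs = tokenizer("-- Write a SQL query that lists all users:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```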
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Michel-13B-GGUF | mradermacher | 2024-05-06T04:44:04Z | 193 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:PotatoOff/Michel-13B",
"base_model:quantized:PotatoOff/Michel-13B",
"license:agpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-16T07:06:16Z | ---
base_model: PotatoOff/Michel-13B
language:
- en
library_name: transformers
license: agpl-3.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/PotatoOff/Michel-13B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
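To fetch a single quant from this repo programmatically, the standard `huggingface_hub` download helper can be used; the file name below is the Q4_K_M entry from the table that follows.

```python
# Download one GGUF quant from this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Michel-13B-GGUF",
    filename="Michel-13B.Q4_K_M.gguf",  # pick any file listed in the table below
)
print(path)  # local cache path of the downloaded quant
```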
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Michel-13B-GGUF/resolve/main/Michel-13B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SeaMax-7B-GGUF | mradermacher | 2024-05-06T04:43:44Z | 43 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mpasila/SeaMax-7B",
"base_model:quantized:mpasila/SeaMax-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-16T13:43:30Z | ---
base_model: mpasila/SeaMax-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mpasila/SeaMax-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
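One common way to run these GGUF files from Python is via llama-cpp-python; a rough sketch follows. The model path matches the Q4_K_M file in the table below, while the context size, prompt, and sampling settings are placeholders.

```python
# Rough sketch: run a downloaded GGUF quant with llama-cpp-python (assumed API usage).
from llama_cpp import Llama

llm = Llama(model_path="SeaMax-7B.Q4_K_M.gguf", n_ctx=4096)  # file name from the table below
out = llm("Q: What does GGUF stand for?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```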
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/kaori-72b-v1-i1-GGUF | mradermacher | 2024-05-06T04:43:33Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:KaeriJenti/kaori-72b-v1",
"base_model:quantized:KaeriJenti/kaori-72b-v1",
"license:unknown",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T17:29:26Z | ---
base_model: KaeriJenti/kaori-72b-v1
language:
- en
library_name: transformers
license: unknown
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/KaeriJenti/kaori-72b-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/kaori-72b-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 17.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.6 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 24.7 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q2_K.gguf) | i1-Q2_K | 26.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 28.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 30.5 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 31.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 34.8 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 36.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.9 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q4_0.gguf) | i1-Q4_0 | 41.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 41.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 45.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 53.2 | |
| [PART 1](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/kaori-72b-v1-i1-GGUF/resolve/main/kaori-72b-v1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 59.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/mergekit-slerp-exkkzvd-GGUF | mradermacher | 2024-05-06T04:43:31Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/mergekit-slerp-exkkzvd",
"base_model:quantized:mergekit-community/mergekit-slerp-exkkzvd",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T17:36:21Z | ---
base_model: mergekit-community/mergekit-slerp-exkkzvd
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mergekit-community/mergekit-slerp-exkkzvd
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-exkkzvd-GGUF/resolve/main/mergekit-slerp-exkkzvd.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Lumina-3.5-GGUF | mradermacher | 2024-05-06T04:43:23Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:Ppoyaa/Lumina-3.5",
"base_model:quantized:Ppoyaa/Lumina-3.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T18:40:36Z | ---
base_model: Ppoyaa/Lumina-3.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Ppoyaa/Lumina-3.5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q2_K.gguf) | Q2_K | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.IQ3_XS.gguf) | IQ3_XS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.IQ3_M.gguf) | IQ3_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q3_K_L.gguf) | Q3_K_L | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.IQ4_XS.gguf) | IQ4_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q5_K_S.gguf) | Q5_K_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q5_K_M.gguf) | Q5_K_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q6_K.gguf) | Q6_K | 15.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF/resolve/main/Lumina-3.5.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/mergekit-ties-vjlpsxw-GGUF | mradermacher | 2024-05-06T04:43:20Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/mergekit-ties-vjlpsxw",
"base_model:quantized:mergekit-community/mergekit-ties-vjlpsxw",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:30:06Z | ---
base_model: mergekit-community/mergekit-ties-vjlpsxw
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mergekit-community/mergekit-ties-vjlpsxw
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Neversleep-11B-v0.1-GGUF | mradermacher | 2024-05-06T04:43:09Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-16T22:47:53Z | ---
base_model: crimsonjoo/Neversleep-11B-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/crimsonjoo/Neversleep-11B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.IQ3_XS.gguf) | IQ3_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.IQ3_M.gguf) | IQ3_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.IQ4_XS.gguf) | IQ4_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q4_K_S.gguf) | Q4_K_S | 6.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q5_K_S.gguf) | Q5_K_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q5_K_M.gguf) | Q5_K_M | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q6_K.gguf) | Q6_K | 9.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NSK-128k-7B-slerp-GGUF | mradermacher | 2024-05-06T04:43:06Z | 44 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Nitral-AI/Nyan-Stunna-7B",
"Nitral-AI/Kunocchini-7b-128k-test",
"128k",
"en",
"base_model:AlekseiPravdin/NSK-128k-7B-slerp",
"base_model:quantized:AlekseiPravdin/NSK-128k-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-16T23:27:35Z | ---
base_model: AlekseiPravdin/NSK-128k-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Nitral-AI/Nyan-Stunna-7B
- Nitral-AI/Kunocchini-7b-128k-test
- 128k
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/AlekseiPravdin/NSK-128k-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NSK-128k-7B-slerp-GGUF/resolve/main/NSK-128k-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rahul9699/wav2vec2-base-gig-demo-colab | rahul9699 | 2024-05-06T04:42:51Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T05:19:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MeowGPT-3.5-GGUF | mradermacher | 2024-05-06T04:42:45Z | 91 | 0 | transformers | [
"transformers",
"gguf",
"freeai",
"conversational",
"meowgpt",
"gpt",
"free",
"opensource",
"splittic",
"ai",
"en",
"base_model:cutycat2000x/MeowGPT-3.5",
"base_model:quantized:cutycat2000x/MeowGPT-3.5",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:15:54Z | ---
base_model: cutycat2000x/MeowGPT-3.5
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- freeai
- conversational
- meowgpt
- gpt
- free
- opensource
- splittic
- ai
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/cutycat2000x/MeowGPT-3.5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Wiz2Beagle-7b-v1-GGUF | mradermacher | 2024-05-06T04:42:35Z | 53 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"vortexmergekit",
"amazingvince/Not-WizardLM-2-7B",
"mlabonne/NeuralBeagle14-7B",
"en",
"base_model:eldogbbhed/Wiz2Beagle-7b-v1",
"base_model:quantized:eldogbbhed/Wiz2Beagle-7b-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T05:25:21Z | ---
base_model: eldogbbhed/Wiz2Beagle-7b-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- vortexmergekit
- amazingvince/Not-WizardLM-2-7B
- mlabonne/NeuralBeagle14-7B
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/eldogbbhed/Wiz2Beagle-7b-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Wiz2Beagle-7b-v1-GGUF/resolve/main/Wiz2Beagle-7b-v1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MoMo-70B-V1.1-i1-GGUF | mradermacher | 2024-05-06T04:42:33Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:bongchoi/MoMo-70B-V1.1",
"base_model:quantized:bongchoi/MoMo-70B-V1.1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T05:37:47Z | ---
base_model: bongchoi/MoMo-70B-V1.1
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/bongchoi/MoMo-70B-V1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MoMo-70B-V1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MoMo-70B-V1.1-i1-GGUF/resolve/main/MoMo-70B-V1.1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
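The Q6_K row above is split into two `.part*of2` files. Below is a minimal sketch of the "concatenate multi-part files" step mentioned under Usage: it assumes both parts are fetched with `huggingface_hub` and simply joins them byte-for-byte (the equivalent of `cat part1 part2 > whole.gguf`) into a single loadable GGUF file.

```python
from pathlib import Path

from huggingface_hub import hf_hub_download

repo_id = "mradermacher/MoMo-70B-V1.1-i1-GGUF"
parts = [
    "MoMo-70B-V1.1.i1-Q6_K.gguf.part1of2",
    "MoMo-70B-V1.1.i1-Q6_K.gguf.part2of2",
]
out_path = Path("MoMo-70B-V1.1.i1-Q6_K.gguf")

# Join the parts in order; the result is one ~57 GB GGUF file.
with out_path.open("wb") as merged:
    for name in parts:
        local = hf_hub_download(repo_id=repo_id, filename=name)
        with open(local, "rb") as part:
            while True:
                chunk = part.read(64 * 1024 * 1024)  # copy in 64 MiB blocks
                if not chunk:
                    break
                merged.write(chunk)
```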
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Dolph-Lund-Wizard-7B-GGUF | mradermacher | 2024-05-06T04:42:23Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Noodlz/Dolph-Lund-Wizard-7B",
"base_model:quantized:Noodlz/Dolph-Lund-Wizard-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T10:46:42Z | ---
base_model: Noodlz/Dolph-Lund-Wizard-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Noodlz/Dolph-Lund-Wizard-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/WizardLaker-7B-GGUF | mradermacher | 2024-05-06T04:42:20Z | 512 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Noodlz/WizardLaker-7B",
"base_model:quantized:Noodlz/WizardLaker-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T10:56:45Z | ---
base_model: Noodlz/WizardLaker-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Noodlz/WizardLaker-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/CeptrixBeagle-12B-MoE-GGUF | mradermacher | 2024-05-06T04:42:17Z | 75 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/NeuralCeptrix-7B-slerp",
"paulml/OmniBeagleSquaredMBX-v3-7B",
"en",
"base_model:allknowingroger/CeptrixBeagle-12B-MoE",
"base_model:quantized:allknowingroger/CeptrixBeagle-12B-MoE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T11:24:00Z | ---
base_model: allknowingroger/CeptrixBeagle-12B-MoE
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- allknowingroger/NeuralCeptrix-7B-slerp
- paulml/OmniBeagleSquaredMBX-v3-7B
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/allknowingroger/CeptrixBeagle-12B-MoE
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MonaTrix-v4-7B-DPO-GGUF | mradermacher | 2024-05-06T04:42:07Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:CultriX/MonaTrix-v4-7B-DPO",
"base_model:quantized:CultriX/MonaTrix-v4-7B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T16:10:46Z | ---
base_model: CultriX/MonaTrix-v4-7B-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CultriX/MonaTrix-v4-7B-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/KSI-RP-NSK-128k-7B-GGUF | mradermacher | 2024-05-06T04:41:50Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp",
"AlekseiPravdin/NSK-128k-7B-slerp",
"en",
"base_model:AlekseiPravdin/KSI-RP-NSK-128k-7B",
"base_model:quantized:AlekseiPravdin/KSI-RP-NSK-128k-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-17T19:28:43Z | ---
base_model: AlekseiPravdin/KSI-RP-NSK-128k-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp
- AlekseiPravdin/NSK-128k-7B-slerp
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/AlekseiPravdin/KSI-RP-NSK-128k-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/KSI-RP-NSK-128k-7B-GGUF/resolve/main/KSI-RP-NSK-128k-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Sappho_V0.0.4-GGUF | mradermacher | 2024-05-06T04:41:32Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Jakolo121/Sappho_V0.0.3",
"VAGOsolutions/SauerkrautLM-7b-HerO",
"en",
"base_model:Jakolo121/Sappho_V0.0.4",
"base_model:quantized:Jakolo121/Sappho_V0.0.4",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T23:03:08Z | ---
base_model: Jakolo121/Sappho_V0.0.4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Jakolo121/Sappho_V0.0.3
- VAGOsolutions/SauerkrautLM-7b-HerO
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Jakolo121/Sappho_V0.0.4
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Sappho_V0.0.4-GGUF/resolve/main/Sappho_V0.0.4.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Camel-Platypus2-70B-GGUF | mradermacher | 2024-05-06T04:41:08Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:garage-bAInd/Open-Platypus",
"base_model:garage-bAInd/Camel-Platypus2-70B",
"base_model:quantized:garage-bAInd/Camel-Platypus2-70B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:59:36Z | ---
base_model: garage-bAInd/Camel-Platypus2-70B
datasets:
- garage-bAInd/Open-Platypus
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/garage-bAInd/Camel-Platypus2-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Camel-Platypus2-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
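The larger quants below (Q6_K, Q8_0) come as split parts; as a minimal sketch, assuming the parts are plain byte-wise splits of a single GGUF file, they can be joined like this before use:
```python
from pathlib import Path

# Join raw byte-split parts (e.g. *.gguf.part1of2, *.gguf.part2of2) into one file.
parts = sorted(Path(".").glob("Camel-Platypus2-70B.Q6_K.gguf.part*"))
with open("Camel-Platypus2-70B.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 24):  # stream in 16 MiB chunks
                out.write(chunk)
```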
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Boundary-4x7b-MoE-i1-GGUF | mradermacher | 2024-05-06T04:40:55Z | 75 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"merge",
"mergekit",
"HuggingFaceH4/zephyr-7b-beta",
"mistralai/Mistral-7B-Instruct-v0.2",
"teknium/OpenHermes-2.5-Mistral-7B",
"meta-math/MetaMath-Mistral-7B",
"Mistral",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-18T08:30:12Z | ---
base_model: NotAiLOL/Boundary-4x7b-MoE
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- merge
- mergekit
- HuggingFaceH4/zephyr-7b-beta
- mistralai/Mistral-7B-Instruct-v0.2
- teknium/OpenHermes-2.5-Mistral-7B
- meta-math/MetaMath-Mistral-7B
- Mistral
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/NotAiLOL/Boundary-4x7b-MoE
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Boundary-4x7b-MoE-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ1_S.gguf) | i1-IQ1_S | 5.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ1_M.gguf) | i1-IQ1_M | 5.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ2_S.gguf) | i1-IQ2_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ2_M.gguf) | i1-IQ2_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q2_K.gguf) | i1-Q2_K | 8.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ3_S.gguf) | i1-IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ3_M.gguf) | i1-IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q4_0.gguf) | i1-Q4_0 | 13.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-4x7b-MoE-i1-GGUF/resolve/main/Boundary-4x7b-MoE.i1-Q6_K.gguf) | i1-Q6_K | 19.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Coxcomb-GGUF | mradermacher | 2024-05-06T04:40:36Z | 141 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:N8Programs/CreativeGPT",
"base_model:N8Programs/Coxcomb",
"base_model:quantized:N8Programs/Coxcomb",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-18T12:17:13Z | ---
base_model: N8Programs/Coxcomb
datasets:
- N8Programs/CreativeGPT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/N8Programs/Coxcomb
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
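As a rough sketch of local use, a downloaded quant can be loaded with `llama-cpp-python` (assuming it is installed; the model's own prompt template is not applied here):
```python
from llama_cpp import Llama

# Load a local GGUF quant (path and context size are illustrative).
llm = Llama(model_path="Coxcomb.Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Write a short opening line for a story about a lighthouse keeper.",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```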
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Coxcomb-GGUF/resolve/main/Coxcomb.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Lila-70B-L2-GGUF | mradermacher | 2024-05-06T04:40:25Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Lila-70B-L2",
"base_model:quantized:Sao10K/Lila-70B-L2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T14:54:25Z | ---
base_model: Sao10K/Lila-70B-L2
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Sao10K/Lila-70B-L2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Lila-70B-L2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Lila-70B-L2-GGUF/resolve/main/Lila-70B-L2.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/chronob-1.4-lin-70b-i1-GGUF | mradermacher | 2024-05-06T04:40:14Z | 28 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:uncensorie/chronob-1.4-lin-70b",
"base_model:quantized:uncensorie/chronob-1.4-lin-70b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T16:17:36Z | ---
base_model: uncensorie/chronob-1.4-lin-70b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/uncensorie/chronob-1.4-lin-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF/resolve/main/chronob-1.4-lin-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/tulu-2-70b-i1-GGUF | mradermacher | 2024-05-06T04:40:12Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:allenai/tulu-v2-sft-mixture",
"base_model:allenai/tulu-2-70b",
"base_model:quantized:allenai/tulu-2-70b",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T16:17:36Z | ---
base_model: allenai/tulu-2-70b
datasets:
- allenai/tulu-v2-sft-mixture
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/allenai/tulu-2-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/tulu-2-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/tulu-2-70b-i1-GGUF/resolve/main/tulu-2-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/mistral-7b-orpo-v5.0-GGUF | mradermacher | 2024-05-06T04:39:59Z | 68 | 0 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"trl",
"orpo",
"generated_from_trainer",
"en",
"dataset:argilla/Capybara-Preferences",
"base_model:orpo-explorers/mistral-7b-orpo-v5.0",
"base_model:quantized:orpo-explorers/mistral-7b-orpo-v5.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-18T21:50:10Z | ---
base_model: orpo-explorers/mistral-7b-orpo-v5.0
datasets:
- argilla/Capybara-Preferences
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- orpo
- generated_from_trainer
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/orpo-explorers/mistral-7b-orpo-v5.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v5.0-GGUF/resolve/main/mistral-7b-orpo-v5.0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/BRisa-7B-Instruct-v0.2-GGUF | mradermacher | 2024-05-06T04:39:56Z | 10 | 1 | transformers | [
"transformers",
"gguf",
"JJhooww/Mistral-7B-v0.2-Base_ptbr",
"J-LAB/BRisa",
"en",
"base_model:J-LAB/BRisa-7B-Instruct-v0.2",
"base_model:quantized:J-LAB/BRisa-7B-Instruct-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-18T22:09:02Z | ---
base_model: J-LAB/BRisa-7B-Instruct-v0.2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- JJhooww/Mistral-7B-v0.2-Base_ptbr
- J-LAB/BRisa
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/J-LAB/BRisa-7B-Instruct-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BRisa-7B-Instruct-v0.2-GGUF/resolve/main/BRisa-7B-Instruct-v0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF | mradermacher | 2024-05-06T04:39:54Z | 98 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"merge",
"mergekit",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"Nexusflow/Starling-LM-7B-beta",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-18T23:56:27Z | ---
base_model: NotAiLOL/Boundary-Hermes-Chat-2x7B-MoE
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- merge
- mergekit
- NousResearch/Hermes-2-Pro-Mistral-7B
- Nexusflow/Starling-LM-7B-beta
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/NotAiLOL/Boundary-Hermes-Chat-2x7B-MoE
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Hermes-Chat-2x7B-MoE-GGUF/resolve/main/Boundary-Hermes-Chat-2x7B-MoE.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Admiral-Llama-3-8B-GGUF | mradermacher | 2024-05-06T04:39:46Z | 53 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"alpaca",
"en",
"dataset:vicgalle/alpaca-gpt4",
"base_model:mayacinka/Admiral-Llama-3-8B",
"base_model:quantized:mayacinka/Admiral-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T06:39:36Z | ---
base_model: mayacinka/Admiral-Llama-3-8B
datasets:
- vicgalle/alpaca-gpt4
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- alpaca
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mayacinka/Admiral-Llama-3-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Admiral-Llama-3-8B-GGUF/resolve/main/Admiral-Llama-3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MermaidMoE-19B-GGUF | mradermacher | 2024-05-06T04:39:39Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/MermaidMoE-19B",
"base_model:quantized:TroyDoesAI/MermaidMoE-19B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T06:42:11Z | ---
base_model: TroyDoesAI/MermaidMoE-19B
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/MermaidMoE-19B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q2_K.gguf) | Q2_K | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.IQ3_XS.gguf) | IQ3_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q3_K_S.gguf) | Q3_K_S | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.IQ3_S.gguf) | IQ3_S | 8.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.IQ3_M.gguf) | IQ3_M | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q3_K_M.gguf) | Q3_K_M | 9.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q3_K_L.gguf) | Q3_K_L | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.IQ4_XS.gguf) | IQ4_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q4_K_S.gguf) | Q4_K_S | 11.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q4_K_M.gguf) | Q4_K_M | 11.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q5_K_S.gguf) | Q5_K_S | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q5_K_M.gguf) | Q5_K_M | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q6_K.gguf) | Q6_K | 15.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MermaidMoE-19B-GGUF/resolve/main/MermaidMoE-19B.Q8_0.gguf) | Q8_0 | 20.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mermaid-Llama-3-8B-GGUF | mradermacher | 2024-05-06T04:39:37Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/Mermaid-Llama-3-8B",
"base_model:quantized:TroyDoesAI/Mermaid-Llama-3-8B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T07:06:01Z | ---
base_model: TroyDoesAI/Mermaid-Llama-3-8B
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/Mermaid-Llama-3-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-8B-GGUF/resolve/main/Mermaid-Llama-3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Configurable-Llama-3-8B-v0.2-GGUF | mradermacher | 2024-05-06T04:39:31Z | 62 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:vicgalle/configurable-system-prompt-multitask",
"base_model:vicgalle/Configurable-Llama-3-8B-v0.2",
"base_model:quantized:vicgalle/Configurable-Llama-3-8B-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T10:42:34Z | ---
base_model: vicgalle/Configurable-Llama-3-8B-v0.2
datasets:
- vicgalle/configurable-system-prompt-multitask
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.2-GGUF/resolve/main/Configurable-Llama-3-8B-v0.2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF | mradermacher | 2024-05-06T04:39:29Z | 30 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"llama-3",
"ko",
"en",
"dataset:MarkrAI/KoCommercial-Dataset",
"base_model:PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct",
"base_model:quantized:PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T10:42:56Z | ---
base_model: PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct
datasets:
- MarkrAI/KoCommercial-Dataset
language:
- ko
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- llama
- llama-3
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Waktaverse-Llama-3-KO-8B-Instruct-GGUF/resolve/main/Waktaverse-Llama-3-KO-8B-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF | mradermacher | 2024-05-06T04:39:26Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:anik424/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2",
"base_model:quantized:anik424/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T11:15:26Z | ---
base_model: anik424/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/anik424/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B_toxic-removed-dpo-v0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF | mradermacher | 2024-05-06T04:39:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:irthpe/OpenHermes-2.5-Mistral-7B-toxic",
"base_model:quantized:irthpe/OpenHermes-2.5-Mistral-7B-toxic",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T11:29:56Z | ---
base_model: irthpe/OpenHermes-2.5-Mistral-7B-toxic
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/irthpe/OpenHermes-2.5-Mistral-7B-toxic
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-toxic-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-toxic.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3-DARE-8B-GGUF | mradermacher | 2024-05-06T04:39:21Z | 90 | 8 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:mlabonne/Llama-3-DARE-8B",
"base_model:quantized:mlabonne/Llama-3-DARE-8B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T11:38:08Z | ---
base_model: mlabonne/Llama-3-DARE-8B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mlabonne/Llama-3-DARE-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-DARE-8B-GGUF/resolve/main/Llama-3-DARE-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mermaid-Solar-GGUF | mradermacher | 2024-05-06T04:39:18Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/Mermaid-Solar",
"base_model:quantized:TroyDoesAI/Mermaid-Solar",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-19T11:59:42Z | ---
base_model: TroyDoesAI/Mermaid-Solar
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/Mermaid-Solar
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Skadi-Mixtral-v1-GGUF | mradermacher | 2024-05-06T04:39:13Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"base_model:Sao10K/Skadi-Mixtral-v1",
"base_model:quantized:Sao10K/Skadi-Mixtral-v1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-19T12:41:18Z | ---
base_model: Sao10K/Skadi-Mixtral-v1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Sao10K/Skadi-Mixtral-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Skadi-Mixtral-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Skadi-Mixtral-v1-GGUF/resolve/main/Skadi-Mixtral-v1.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tsavage68/chat_400_STEPS_05beta_1e6rate_CDPOSFT | tsavage68 | 2024-05-06T04:39:10Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/chat_600STEPS_1e8rate_SFT",
"base_model:finetune:tsavage68/chat_600STEPS_1e8rate_SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T04:35:40Z | ---
base_model: tsavage68/chat_600STEPS_1e8rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: chat_400_STEPS_05beta_1e6rate_CDPOSFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chat_400_STEPS_05beta_1e6rate_CDPOSFT
This model is a fine-tuned version of [tsavage68/chat_600STEPS_1e8rate_SFT](https://huggingface.co/tsavage68/chat_600STEPS_1e8rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set (a short sketch of how these DPO metrics are derived follows the list):
- Loss: 0.6853
- Rewards/chosen: -0.1288
- Rewards/rejected: -0.2807
- Rewards/accuracies: 0.5143
- Rewards/margins: 0.1518
- Logps/rejected: -19.3633
- Logps/chosen: -17.0123
- Logits/rejected: -0.5890
- Logits/chosen: -0.5888
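To make the reward and log-probability columns above easier to read, here is a hedged sketch of how TRL-style DPO typically derives them; the beta value of 0.5 is an assumption inferred from the "05beta" run name, and none of this code comes from the actual training run.
```python
# Hedged illustration (not code from this run): how TRL-style DPO derives the metrics above
# from policy and reference log-probabilities of the chosen/rejected completions.
import torch
import torch.nn.functional as F

beta = 0.5  # assumed from the "05beta" run name

def dpo_metrics(pi_chosen, pi_rejected, ref_chosen, ref_rejected):
    rewards_chosen = beta * (pi_chosen - ref_chosen)         # Rewards/chosen
    rewards_rejected = beta * (pi_rejected - ref_rejected)   # Rewards/rejected
    margins = rewards_chosen - rewards_rejected              # Rewards/margins
    accuracy = (margins > 0).float().mean()                  # Rewards/accuracies
    # Standard sigmoid DPO loss; the "CDPO" in the run name may indicate a label-smoothed variant.
    loss = -F.logsigmoid(margins).mean()
    return loss, rewards_chosen.mean(), rewards_rejected.mean(), margins.mean(), accuracy
```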
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged trainer sketch built from them follows the list):
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 400
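A hedged sketch of how the settings above could be wired into a trl `DPOTrainer` run follows; the toy dataset, the beta value (inferred from the run name) and the exact keyword names (which differ between trl versions) are assumptions rather than details taken from this repository.
```python
# Sketch only: mapping the hyperparameters above onto a trl DPOTrainer run.
# The preference dataset is listed as unknown in this card, so a toy stand-in is used here.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/chat_600STEPS_1e8rate_SFT"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

train_dataset = Dataset.from_dict({  # toy stand-in for the real (unknown) preference data
    "prompt": ["What is DPO?"],
    "chosen": ["A preference-optimisation method that needs no explicit reward model."],
    "rejected": ["No idea."],
})

args = TrainingArguments(
    output_dir="chat_400_STEPS_05beta_1e6rate_CDPOSFT",
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # total train batch size 8
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=400,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,               # trl clones the policy as the frozen reference when None
    args=args,
    beta=0.5,                     # assumed from the "05beta" run name
    train_dataset=train_dataset,
    eval_dataset=train_dataset,   # placeholder; the real eval split is not named in the card
    tokenizer=tokenizer,
)
trainer.train()
```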
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6871 | 0.0977 | 50 | 0.6897 | 0.0517 | 0.0417 | 0.4352 | 0.0100 | -18.7185 | -16.6512 | -0.6010 | -0.6009 |
| 0.6399 | 0.1953 | 100 | 0.6728 | -0.1560 | -0.2548 | 0.5099 | 0.0989 | -19.3116 | -17.0666 | -0.6090 | -0.6089 |
| 0.752 | 0.2930 | 150 | 0.6985 | -0.1949 | -0.2845 | 0.4505 | 0.0896 | -19.3710 | -17.1445 | -0.5936 | -0.5934 |
| 0.713 | 0.3906 | 200 | 0.6945 | -0.1538 | -0.2727 | 0.4923 | 0.1188 | -19.3473 | -17.0623 | -0.5881 | -0.5879 |
| 0.7476 | 0.4883 | 250 | 0.6974 | -0.1319 | -0.2605 | 0.5165 | 0.1286 | -19.3230 | -17.0185 | -0.5854 | -0.5852 |
| 0.6906 | 0.5859 | 300 | 0.6883 | -0.1320 | -0.2782 | 0.5165 | 0.1461 | -19.3583 | -17.0187 | -0.5910 | -0.5909 |
| 0.6808 | 0.6836 | 350 | 0.6861 | -0.1290 | -0.2784 | 0.5077 | 0.1494 | -19.3587 | -17.0125 | -0.5888 | -0.5887 |
| 0.6476 | 0.7812 | 400 | 0.6853 | -0.1288 | -0.2807 | 0.5143 | 0.1518 | -19.3633 | -17.0123 | -0.5890 | -0.5888 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.0
- Tokenizers 0.19.1
|
mradermacher/Franziska-Mixtral-v1-i1-GGUF | mradermacher | 2024-05-06T04:39:07Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Franziska-Mixtral-v1",
"base_model:quantized:Sao10K/Franziska-Mixtral-v1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-19T14:04:16Z | ---
base_model: Sao10K/Franziska-Mixtral-v1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Sao10K/Franziska-Mixtral-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Franziska-Mixtral-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/Franziska-Mixtral-v1-i1-GGUF/resolve/main/Franziska-Mixtral-v1.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OkapiLlama-3-dpo-GGUF | mradermacher | 2024-05-06T04:39:02Z | 47 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"dpo",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"base_model:mayacinka/OkapiLlama-3-dpo",
"base_model:quantized:mayacinka/OkapiLlama-3-dpo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T16:55:53Z | ---
base_model: mayacinka/OkapiLlama-3-dpo
datasets:
- mlabonne/orpo-dpo-mix-40k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- dpo
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mayacinka/OkapiLlama-3-dpo
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OkapiLlama-3-dpo-GGUF/resolve/main/OkapiLlama-3-dpo.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Aetheria-L2-70B-GGUF | mradermacher | 2024-05-06T04:38:56Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"llama 2",
"en",
"base_model:royallab/Aetheria-L2-70B",
"base_model:quantized:royallab/Aetheria-L2-70B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-19T17:01:01Z | ---
base_model: royallab/Aetheria-L2-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama
- llama 2
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/royallab/Aetheria-L2-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Aetheria-L2-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Aetheria-L2-70B-GGUF/resolve/main/Aetheria-L2-70B.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
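The Q6_K and Q8_0 rows above are split into two parts each; per the usage note, the parts only need to be concatenated byte-wise into a single file. A small sketch of my own (with the Q6_K filenames taken from the table) is:
```python
# Sketch: download both parts of the Q6_K quant and reassemble them by byte-wise concatenation.
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/Aetheria-L2-70B-GGUF"
parts = [
    hf_hub_download(repo_id=repo, filename=f"Aetheria-L2-70B.Q6_K.gguf.part{i}of2")
    for i in (1, 2)
]
with open("Aetheria-L2-70B.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)  # append each part in order
```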
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/cyber-risk-llama-3-8b-GGUF | mradermacher | 2024-05-06T04:38:48Z | 104 | 4 | transformers | [
"transformers",
"gguf",
"finance",
"supervision",
"cyber risk",
"cybersecurity",
"cyber threats",
"SFT",
"LoRA",
"A100GPU",
"en",
"dataset:Vanessasml/cybersecurity_32k_instruction_input_output",
"base_model:Vanessasml/cyber-risk-llama-3-8b",
"base_model:quantized:Vanessasml/cyber-risk-llama-3-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T18:47:31Z | ---
base_model: Vanessasml/cyber-risk-llama-3-8b
datasets:
- Vanessasml/cybersecurity_32k_instruction_input_output
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- finance
- supervision
- cyber risk
- cybersecurity
- cyber threats
- SFT
- LoRA
- A100GPU
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Vanessasml/cyber-risk-llama-3-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
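As a hedged illustration (not part of this repo), recent llama-cpp-python builds can also pull a quant straight from the Hub; the exact `from_pretrained` API depends on the installed version, and the filename below is the Q4_K_S row from the table that follows.
```python
# Sketch only: load a quant from this repo directly via llama-cpp-python's Hub integration.
# Requires a recent llama-cpp-python plus huggingface_hub; treat the API as version-dependent.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/cyber-risk-llama-3-8b-GGUF",
    filename="cyber-risk-llama-3-8b.Q4_K_S.gguf",  # "fast, recommended" row below
    n_ctx=4096,
)
out = llm("List three common categories of cyber risk for a retail bank:", max_tokens=128)
print(out["choices"][0]["text"])
```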
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/cyber-risk-llama-3-8b-GGUF/resolve/main/cyber-risk-llama-3-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3-13B-GGUF | mradermacher | 2024-05-06T04:38:34Z | 14 | 2 | transformers | [
"transformers",
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T22:21:33Z | ---
base_model: Replete-AI/Llama-3-13B
language:
- en
library_name: transformers
license: other
license_link: https://llama.meta.com/llama3/license/
license_name: llama-3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Replete-AI/Llama-3-13B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q2_K.gguf) | Q2_K | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.IQ3_XS.gguf) | IQ3_XS | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.IQ3_M.gguf) | IQ3_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.IQ4_XS.gguf) | IQ4_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q4_K_S.gguf) | Q4_K_S | 7.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF | mradermacher | 2024-05-06T04:38:32Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"merge",
"mergekit",
"NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"upstage/SOLAR-10.7B-Instruct-v1.0",
"llama",
"Llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-19T23:30:43Z | ---
base_model: NotAiLOL/Boundary-Solar-Chat-2x10.7B-MoE
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- merge
- mergekit
- NousResearch/Nous-Hermes-2-SOLAR-10.7B
- upstage/SOLAR-10.7B-Instruct-v1.0
- llama
- Llama
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/NotAiLOL/Boundary-Solar-Chat-2x10.7B-MoE
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q2_K.gguf) | Q2_K | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.IQ3_XS.gguf) | IQ3_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q3_K_S.gguf) | Q3_K_S | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.IQ3_S.gguf) | IQ3_S | 8.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.IQ3_M.gguf) | IQ3_M | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q3_K_M.gguf) | Q3_K_M | 9.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q3_K_L.gguf) | Q3_K_L | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.IQ4_XS.gguf) | IQ4_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q4_K_S.gguf) | Q4_K_S | 11.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q4_K_M.gguf) | Q4_K_M | 11.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q5_K_S.gguf) | Q5_K_S | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q5_K_M.gguf) | Q5_K_M | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q6_K.gguf) | Q6_K | 15.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Boundary-Solar-Chat-2x10.7B-MoE-GGUF/resolve/main/Boundary-Solar-Chat-2x10.7B-MoE.Q8_0.gguf) | Q8_0 | 20.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Platypus2-70B-i1-GGUF | mradermacher | 2024-05-06T04:38:29Z | 295 | 2 | transformers | [
"transformers",
"gguf",
"en",
"dataset:garage-bAInd/Open-Platypus",
"base_model:garage-bAInd/Platypus2-70B",
"base_model:quantized:garage-bAInd/Platypus2-70B",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T01:49:29Z | ---
base_model: garage-bAInd/Platypus2-70B
datasets:
- garage-bAInd/Open-Platypus
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/garage-bAInd/Platypus2-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Platypus2-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Platypus2-70B-i1-GGUF/resolve/main/Platypus2-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/deepseek-llm-67b-chat-GGUF | mradermacher | 2024-05-06T04:38:27Z | 143 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:deepseek-ai/deepseek-llm-67b-chat",
"base_model:quantized:deepseek-ai/deepseek-llm-67b-chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-20T02:06:26Z | ---
base_model: deepseek-ai/deepseek-llm-67b-chat
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: deepseek
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
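For the split quants listed below (Q6_K and Q8_0 ship as two `.partXofY` files), the parts only need to be concatenated byte-for-byte into a single `.gguf` before loading. A minimal sketch, not part of the original card, assuming the part files have already been downloaded into the working directory:
```python
# Reassemble a split GGUF by concatenating its parts in order.
# File names are taken from the table below; adjust for the quant you downloaded.
from pathlib import Path

parts = sorted(Path(".").glob("deepseek-llm-67b-chat.Q6_K.gguf.part*"))
assert parts, "no part files found in the current directory"

with open("deepseek-llm-67b-chat.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Copy in chunks so the multi-gigabyte parts never sit fully in memory.
            while chunk := src.read(64 * 1024 * 1024):
                out.write(chunk)
```
After concatenation the resulting single `.gguf` behaves like any other quant, and the part files can be deleted.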
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q2_K.gguf) | Q2_K | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.IQ3_XS.gguf) | IQ3_XS | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q3_K_S.gguf) | Q3_K_S | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.IQ3_S.gguf) | IQ3_S | 29.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.IQ3_M.gguf) | IQ3_M | 30.6 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q3_K_M.gguf) | Q3_K_M | 32.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q3_K_L.gguf) | Q3_K_L | 35.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.IQ4_XS.gguf) | IQ4_XS | 36.6 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q4_K_S.gguf) | Q4_K_S | 38.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q4_K_M.gguf) | Q4_K_M | 40.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q5_K_S.gguf) | Q5_K_S | 46.6 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q5_K_M.gguf) | Q5_K_M | 47.8 | |
| [PART 1](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q6_K.gguf.part2of2) | Q6_K | 55.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF/resolve/main/deepseek-llm-67b-chat.Q8_0.gguf.part2of2) | Q8_0 | 71.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Aura_L3_8B-GGUF | mradermacher | 2024-05-06T04:38:16Z | 98 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ResplendentAI/Aura_L3_8B",
"base_model:quantized:ResplendentAI/Aura_L3_8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-20T03:48:07Z | ---
base_model: ResplendentAI/Aura_L3_8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ResplendentAI/Aura_L3_8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
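As a concrete starting point (not from the original card), one way to run a single-file quant from the table below is via the llama-cpp-python bindings; the file name and parameters here are illustrative:
```python
# Minimal sketch of loading a GGUF quant with llama-cpp-python
# (pip install llama-cpp-python); model path and settings are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="Aura_L3_8B.Q4_K_M.gguf",  # any single-file quant from the table below
    n_ctx=4096,                           # context window; adjust to your hardware
)

out = llm("Write one sentence about auroras.", max_tokens=64)
print(out["choices"][0]["text"])
```
Any other GGUF runtime (llama.cpp itself, or frontends built on it) works the same way once the file is downloaded.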
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_L3_8B-GGUF/resolve/main/Aura_L3_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|