modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: sequence
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
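Each record below follows this ten-field schema, with the `card` column holding the repository README (YAML front matter plus markdown) as a single string. The following minimal sketch, not part of the dump itself, shows how one such record might be inspected in Python; the example values are abbreviated from the first record, and the `split_card` helper is illustrative.

```python
from datetime import datetime, timezone

# One record shaped like the schema above (values shortened for illustration).
record = {
    "modelId": "mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF",
    "author": "mradermacher",
    "last_modified": datetime(2024, 5, 6, 5, 20, 30, tzinfo=timezone.utc),
    "downloads": 38,
    "likes": 0,
    "library_name": "transformers",
    "tags": ["transformers", "gguf", "moe", "en"],
    "pipeline_tag": None,
    "createdAt": datetime(2024, 4, 3, 15, 2, 34, tzinfo=timezone.utc),
    "card": "---\nbase_model: arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5\n---\n## About\nstatic quants ...",
}

def split_card(card: str) -> tuple[str, str]:
    """Split a model card into its YAML front matter and markdown body."""
    if card.startswith("---"):
        _, front, body = card.split("---", 2)
        return front.strip(), body.strip()
    return "", card.strip()

front_matter, body = split_card(record["card"])
print(record["modelId"], record["downloads"], "downloads")
print(front_matter.splitlines()[0])  # e.g. the base_model line
```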
mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF
mradermacher
2024-05-06T05:20:30Z
38
0
transformers
[ "transformers", "gguf", "moe", "en", "base_model:arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5", "base_model:quantized:arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-03T15:02:34Z
--- base_model: arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - moe --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.IQ3_M.gguf) | IQ3_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF/resolve/main/Brunhilde-2x7b-MOE-DPO-v.01.5.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
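Not part of the original card: a minimal sketch of the usage the card describes, assuming `huggingface_hub` and `llama-cpp-python` are installed. The repo and file name are the Q4_K_S entry from the table above; the prompt and parameters are placeholders.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant listed in the table above (Q4_K_S, "fast, recommended").
gguf_path = hf_hub_download(
    repo_id="mradermacher/Brunhilde-2x7b-MOE-DPO-v.01.5-GGUF",
    filename="Brunhilde-2x7b-MOE-DPO-v.01.5.Q4_K_S.gguf",
)

# Load it with llama.cpp's Python bindings and run a short completion.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Write one sentence about Brunhilde.", max_tokens=64)
print(out["choices"][0]["text"])
```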
mradermacher/HeatherSpell-7b-GGUF
mradermacher
2024-05-06T05:20:25Z
34
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "yam-peleg/Experiment26-7B", "Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5", "en", "base_model:MysticFoxMagic/HeatherSpell-7b", "base_model:quantized:MysticFoxMagic/HeatherSpell-7b", "endpoints_compatible", "region:us" ]
null
2024-04-03T16:44:58Z
--- base_model: MysticFoxMagic/HeatherSpell-7b language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - yam-peleg/Experiment26-7B - Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5 --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MysticFoxMagic/HeatherSpell-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/pandafish-dt-7b-GGUF
mradermacher
2024-05-06T05:20:10Z
64
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "CultriX/MergeCeption-7B-v3", "en", "base_model:ichigoberry/pandafish-dt-7b", "base_model:quantized:ichigoberry/pandafish-dt-7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-03T19:03:45Z
--- base_model: ichigoberry/pandafish-dt-7b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - CultriX/MergeCeption-7B-v3 --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ichigoberry/pandafish-dt-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/airoboros-l2-70b-2.2-GGUF
mradermacher
2024-05-06T05:19:59Z
15
0
transformers
[ "transformers", "gguf", "en", "dataset:jondurbin/airoboros-2.2", "base_model:jondurbin/airoboros-l2-70b-2.2", "base_model:quantized:jondurbin/airoboros-l2-70b-2.2", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-03T20:44:45Z
--- base_model: jondurbin/airoboros-l2-70b-2.2 datasets: - jondurbin/airoboros-2.2 language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jondurbin/airoboros-l2-70b-2.2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q2_K.gguf) | Q2_K | 25.9 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.IQ3_XS.gguf) | IQ3_XS | 28.7 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.IQ3_S.gguf) | IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q3_K_S.gguf) | Q3_K_S | 30.3 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.IQ3_M.gguf) | IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q3_K_M.gguf) | Q3_K_M | 33.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q3_K_L.gguf) | Q3_K_L | 36.6 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.IQ4_XS.gguf) | IQ4_XS | 37.6 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q4_K_S.gguf) | Q4_K_S | 39.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q4_K_M.gguf) | Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q5_K_S.gguf) | Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q5_K_M.gguf) | Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q6_K.gguf.part2of2) | Q6_K | 57.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF/resolve/main/airoboros-l2-70b-2.2.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
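The Q6_K and Q8_0 entries in this card are split into `.part1of2`/`.part2of2` files; the card defers to TheBloke's READMEs for how to concatenate multi-part files. A minimal sketch of downloading and re-joining the Q6_K parts is shown below, assuming the parts are plain byte-level splits that can simply be concatenated in order (file names taken from the table above).

```python
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/airoboros-l2-70b-2.2-GGUF"
parts = [
    "airoboros-l2-70b-2.2.Q6_K.gguf.part1of2",
    "airoboros-l2-70b-2.2.Q6_K.gguf.part2of2",
]

# Download both parts, then join them byte-for-byte into a single GGUF file.
with open("airoboros-l2-70b-2.2.Q6_K.gguf", "wb") as joined:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as part:
            shutil.copyfileobj(part, joined)
```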
mradermacher/FNCARL-7b-GGUF
mradermacher
2024-05-06T05:19:57Z
11
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:jambroz/FNCARL-7b", "base_model:quantized:jambroz/FNCARL-7b", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T20:47:28Z
--- base_model: jambroz/FNCARL-7b language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jambroz/FNCARL-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/KunoichiVerse-7B-GGUF
mradermacher
2024-05-06T05:19:51Z
28
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:Ppoyaa/KunoichiVerse-7B", "base_model:quantized:Ppoyaa/KunoichiVerse-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T21:48:59Z
--- base_model: Ppoyaa/KunoichiVerse-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Ppoyaa/KunoichiVerse-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/StarMonarch-7B-GGUF
mradermacher
2024-05-06T05:19:34Z
71
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:Ppoyaa/StarMonarch-7B", "base_model:quantized:Ppoyaa/StarMonarch-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T00:13:47Z
--- base_model: Ppoyaa/StarMonarch-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Ppoyaa/StarMonarch-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/StarlingMaid-2x7B-base-GGUF
mradermacher
2024-05-06T05:19:27Z
42
0
transformers
[ "transformers", "gguf", "en", "base_model:dawn17/StarlingMaid-2x7B-base", "base_model:quantized:dawn17/StarlingMaid-2x7B-base", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T00:42:41Z
--- base_model: dawn17/StarlingMaid-2x7B-base language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/dawn17/StarlingMaid-2x7B-base <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.IQ3_M.gguf) | IQ3_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/StarlingMaid-2x7B-base-GGUF/resolve/main/StarlingMaid-2x7B-base.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Miqu-MS-70B-i1-GGUF
mradermacher
2024-05-06T05:19:24Z
40
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Undi95/Miqu-MS-70B", "base_model:quantized:Undi95/Miqu-MS-70B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T01:01:01Z
--- base_model: Undi95/Miqu-MS-70B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/Undi95/Miqu-MS-70B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.7 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q4_0.gguf) | i1-Q4_0 | 39.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | | | 
[GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF/resolve/main/Miqu-MS-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
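A small sketch, not from the card, of enumerating which quant files are actually published in a repo with `huggingface_hub`; this can help when choosing between the imatrix (i1) quants above and the static quants the card links to.

```python
from huggingface_hub import list_repo_files

# Compare what is available in the imatrix (i1) and static repos for this model.
for repo in ("mradermacher/Miqu-MS-70B-i1-GGUF", "mradermacher/Miqu-MS-70B-GGUF"):
    ggufs = sorted(f for f in list_repo_files(repo) if ".gguf" in f)
    print(repo)
    for name in ggufs:
        print("  ", name)
```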
mradermacher/Starling-LM-10.7B-beta-GGUF
mradermacher
2024-05-06T05:19:11Z
4
0
transformers
[ "transformers", "gguf", "en", "base_model:ddh0/Starling-LM-10.7B-beta", "base_model:quantized:ddh0/Starling-LM-10.7B-beta", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T02:05:48Z
--- base_model: ddh0/Starling-LM-10.7B-beta language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ddh0/Starling-LM-10.7B-beta <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q2_K.gguf) | Q2_K | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.IQ3_XS.gguf) | IQ3_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q3_K_S.gguf) | Q3_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.IQ3_M.gguf) | IQ3_M | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q3_K_L.gguf) | Q3_K_L | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.IQ4_XS.gguf) | IQ4_XS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q5_K_S.gguf) | Q5_K_S | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q5_K_M.gguf) | Q5_K_M | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q6_K.gguf) | Q6_K | 9.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Starling-LM-10.7B-beta-GGUF/resolve/main/Starling-LM-10.7B-beta.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/AuroraRP-8x7B-GGUF
mradermacher
2024-05-06T05:18:57Z
26
1
transformers
[ "transformers", "gguf", "roleplay", "rp", "mergekit", "merge", "en", "endpoints_compatible", "region:us" ]
null
2024-04-04T04:00:24Z
--- base_model: Fredithefish/AuroraRP-8x7B language: - en library_name: transformers quantized_by: mradermacher tags: - roleplay - rp - mergekit - merge --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Fredithefish/AuroraRP-8x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/AuroraRP-8x7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q2_K.gguf) | Q2_K | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.IQ3_S.gguf) | IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.IQ3_M.gguf) | IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q4_K_S.gguf) | Q4_K_S | 27.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q6_K.gguf) | Q6_K | 38.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/airoboros-l2-70b-2.2-i1-GGUF
mradermacher
2024-05-06T05:18:52Z
54
0
transformers
[ "transformers", "gguf", "en", "dataset:jondurbin/airoboros-2.2", "base_model:jondurbin/airoboros-l2-70b-2.2", "base_model:quantized:jondurbin/airoboros-l2-70b-2.2", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-04T04:00:39Z
--- base_model: jondurbin/airoboros-l2-70b-2.2 datasets: - jondurbin/airoboros-2.2 language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-l2-70b-2.2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.7 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q4_0.gguf) | i1-Q4_0 | 39.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/airoboros-l2-70b-2.2-i1-GGUF/resolve/main/airoboros-l2-70b-2.2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mavens-2-GGUF
mradermacher
2024-05-06T05:18:49Z
322
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:AiMavenAi/Mavens-2", "base_model:quantized:AiMavenAi/Mavens-2", "license:cc-by-nc-nd-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T05:06:34Z
--- base_model: AiMavenAi/Mavens-2 language: - en library_name: transformers license: cc-by-nc-nd-4.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/AiMavenAi/Mavens-2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mavens-2-GGUF/resolve/main/Mavens-2.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/HyouKan-3x7B-V2-32k-GGUF
mradermacher
2024-05-06T05:18:41Z
55
0
transformers
[ "transformers", "gguf", "moe", "merge", "Roleplay", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T06:17:02Z
--- base_model: Alsebay/HyouKan-3x7B-V2-32k language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - moe - merge - Roleplay --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Alsebay/HyouKan-3x7B-V2-32k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q2_K.gguf) | Q2_K | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.IQ3_XS.gguf) | IQ3_XS | 7.8 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q3_K_S.gguf) | Q3_K_S | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.IQ3_S.gguf) | IQ3_S | 8.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.IQ3_M.gguf) | IQ3_M | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q3_K_M.gguf) | Q3_K_M | 9.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q3_K_L.gguf) | Q3_K_L | 9.9 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.IQ4_XS.gguf) | IQ4_XS | 10.3 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q4_K_S.gguf) | Q4_K_S | 10.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q4_K_M.gguf) | Q4_K_M | 11.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q5_K_S.gguf) | Q5_K_S | 13.0 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q5_K_M.gguf) | Q5_K_M | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q6_K.gguf) | Q6_K | 15.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k-GGUF/resolve/main/HyouKan-3x7B-V2-32k.Q8_0.gguf) | Q8_0 | 19.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MistarlingMaid-2x7B-base-GGUF
mradermacher
2024-05-06T05:18:27Z
74
0
transformers
[ "transformers", "gguf", "en", "base_model:dawn17/MistarlingMaid-2x7B-base", "base_model:quantized:dawn17/MistarlingMaid-2x7B-base", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T08:20:09Z
--- base_model: dawn17/MistarlingMaid-2x7B-base language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/dawn17/MistarlingMaid-2x7B-base <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ3_M.gguf) | IQ3_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/TripleMerge2-7B-Ties-GGUF
mradermacher
2024-05-06T05:18:25Z
20
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "allknowingroger/LimyQstar-7B-slerp", "allknowingroger/JaskierMistral-7B-slerp", "allknowingroger/LimmyAutomerge-7B-slerp", "en", "base_model:allknowingroger/TripleMerge2-7B-Ties", "base_model:quantized:allknowingroger/TripleMerge2-7B-Ties", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T08:25:34Z
--- base_model: allknowingroger/TripleMerge2-7B-Ties language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - allknowingroger/LimyQstar-7B-slerp - allknowingroger/JaskierMistral-7B-slerp - allknowingroger/LimmyAutomerge-7B-slerp --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/allknowingroger/TripleMerge2-7B-Ties <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TripleMerge2-7B-Ties-GGUF/resolve/main/TripleMerge2-7B-Ties.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model 
quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF
mradermacher
2024-05-06T05:18:21Z
40
1
transformers
[ "transformers", "gguf", "en", "base_model:Undi95/MythoMax-L2-Kimiko-v2-13b", "base_model:quantized:Undi95/MythoMax-L2-Kimiko-v2-13b", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T09:08:24Z
--- base_model: Undi95/MythoMax-L2-Kimiko-v2-13b language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Undi95/MythoMax-L2-Kimiko-v2-13b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q2_K.gguf) | Q2_K | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.IQ3_XS.gguf) | IQ3_XS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q3_K_S.gguf) | Q3_K_S | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.IQ3_M.gguf) | IQ3_M | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q3_K_L.gguf) | Q3_K_L | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.IQ4_XS.gguf) | IQ4_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q5_K_S.gguf) | Q5_K_S | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q5_K_M.gguf) | Q5_K_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q6_K.gguf) | Q6_K | 11.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-Kimiko-v2-13b-GGUF/resolve/main/MythoMax-L2-Kimiko-v2-13b.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
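The Size/GB column in these tables is the main constraint when choosing a quant. As a rough rule of thumb (an assumption, not anything the cards specify), the file plus some working memory for the KV cache has to fit in RAM or VRAM, so a tiny helper like the one below, with sizes copied from the MythoMax-L2-Kimiko-v2-13b table above, can pick the largest quant that fits a given budget.

```python
# Hypothetical helper, not part of the card: pick the largest listed quant that
# fits a memory budget.  Sizes (GB) are copied from the MythoMax-L2-Kimiko-v2-13b
# table above; the 1.2x headroom for KV cache/runtime overhead is a rough assumption.
SIZES_GB = {
    "Q2_K": 5.1, "IQ3_XS": 5.7, "IQ3_S": 6.0, "Q3_K_S": 6.0, "IQ3_M": 6.3,
    "Q3_K_M": 6.6, "Q3_K_L": 7.2, "IQ4_XS": 7.3, "Q4_K_S": 7.7, "Q4_K_M": 8.2,
    "Q5_K_S": 9.3, "Q5_K_M": 9.5, "Q6_K": 11.0, "Q8_0": 14.1,
}

def pick_quant(budget_gb: float, headroom: float = 1.2):
    """Return the largest quant whose file size times `headroom` fits the budget."""
    fitting = {t: s for t, s in SIZES_GB.items() if s * headroom <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(12.0))  # -> "Q5_K_M" (9.5 GB * 1.2 = 11.4 GB, under a 12 GB budget)
```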
mradermacher/meerkat-7b-v1.0-GGUF
mradermacher
2024-05-06T05:18:17Z
70
0
transformers
[ "transformers", "gguf", "medical", "small LM", "instruction-tuned", "usmle", "chain-of-thought", "synthetic data", "en", "base_model:dmis-lab/meerkat-7b-v1.0", "base_model:quantized:dmis-lab/meerkat-7b-v1.0", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T09:27:33Z
--- base_model: dmis-lab/meerkat-7b-v1.0 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - medical - small LM - instruction-tuned - usmle - chain-of-thought - synthetic data --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/dmis-lab/meerkat-7b-v1.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q2_K.gguf) | Q2_K | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.IQ3_S.gguf) | IQ3_S | 3.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.IQ3_M.gguf) | IQ3_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q6_K.gguf) | Q6_K | 6.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/meerkat-7b-v1.0-GGUF/resolve/main/meerkat-7b-v1.0.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF
mradermacher
2024-05-06T05:18:04Z
37
2
transformers
[ "transformers", "gguf", "mergekit", "megamerge", "code", "Cyber-Series", "en", "dataset:Open-Orca/OpenOrca", "dataset:cognitivecomputations/dolphin", "dataset:WhiteRabbitNeo/WRN-Chapter-2", "dataset:WhiteRabbitNeo/WRN-Chapter-1", "dataset:gate369/Alpaca-Star", "dataset:gate369/alpaca-star-ascii", "base_model:LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0", "base_model:quantized:LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T10:11:11Z
--- base_model: LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0 datasets: - Open-Orca/OpenOrca - cognitivecomputations/dolphin - WhiteRabbitNeo/WRN-Chapter-2 - WhiteRabbitNeo/WRN-Chapter-1 - gate369/Alpaca-Star - gate369/alpaca-star-ascii language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - mergekit - megamerge - code - Cyber-Series --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_Matrix_2_0-GGUF/resolve/main/Mixtral_AI_Cyber_Matrix_2_0.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are 
Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF
mradermacher
2024-05-06T05:17:59Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:ddh0/Mistral-10.7B-Instruct-v0.2", "base_model:quantized:ddh0/Mistral-10.7B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T12:08:09Z
--- base_model: ddh0/Mistral-10.7B-Instruct-v0.2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ddh0/Mistral-10.7B-Instruct-v0.2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q2_K.gguf) | Q2_K | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ3_XS.gguf) | IQ3_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q3_K_S.gguf) | Q3_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ3_M.gguf) | IQ3_M | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q3_K_L.gguf) | Q3_K_L | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ4_XS.gguf) | IQ4_XS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q5_K_S.gguf) | Q5_K_S | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q6_K.gguf) | Q6_K | 9.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if 
you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/megatron_2.1_MoE_2x7B-GGUF
mradermacher
2024-05-06T05:17:51Z
1
0
transformers
[ "transformers", "gguf", "moe", "merge", "en", "base_model:Eurdem/megatron_2.1_MoE_2x7B", "base_model:quantized:Eurdem/megatron_2.1_MoE_2x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T12:13:09Z
--- base_model: Eurdem/megatron_2.1_MoE_2x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - merge --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Eurdem/megatron_2.1_MoE_2x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.IQ3_M.gguf) | IQ3_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/megatron_2.1_MoE_2x7B-GGUF/resolve/main/megatron_2.1_MoE_2x7B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/CatNyanster-34b-GGUF
mradermacher
2024-05-06T05:17:44Z
59
1
transformers
[ "transformers", "gguf", "merge", "roleplay", "chat", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T13:04:35Z
--- base_model: arlineka/CatNyanster-34b language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - merge - roleplay - chat --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/arlineka/CatNyanster-34b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q2_K.gguf) | Q2_K | 13.5 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.IQ3_XS.gguf) | IQ3_XS | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q3_K_S.gguf) | Q3_K_S | 15.6 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.IQ3_S.gguf) | IQ3_S | 15.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.IQ3_M.gguf) | IQ3_M | 16.2 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q3_K_M.gguf) | Q3_K_M | 17.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q3_K_L.gguf) | Q3_K_L | 18.8 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.IQ4_XS.gguf) | IQ4_XS | 19.3 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q4_K_S.gguf) | Q4_K_S | 20.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q4_K_M.gguf) | Q4_K_M | 21.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q5_K_S.gguf) | Q5_K_S | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q5_K_M.gguf) | Q5_K_M | 25.0 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q6_K.gguf) | Q6_K | 28.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-GGUF/resolve/main/CatNyanster-34b.Q8_0.gguf) | Q8_0 | 37.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/CatNyanster-34b-i1-GGUF
mradermacher
2024-05-06T05:17:35Z
4
0
transformers
[ "transformers", "gguf", "en", "endpoints_compatible", "region:us" ]
null
2024-04-04T15:33:46Z
--- base_model: arlineka/CatNyanster-34b language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/arlineka/CatNyanster-34b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/CatNyanster-34b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q5_K_S.gguf) | 
i1-Q5_K_S | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-34b-i1-GGUF/resolve/main/CatNyanster-34b.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
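Since the imatrix repo above ships many `i1-*` variants, it can be handy to list what is actually available before downloading. Below is a small sketch, assuming the `huggingface_hub` package; the file name is taken from the table above, where i1-Q4_K_S is marked "optimal size/speed/quality".

```python
# Sketch, assuming the huggingface_hub package: list the i1 quants in this repo and
# download i1-Q4_K_S, which the table above marks "optimal size/speed/quality".
from huggingface_hub import hf_hub_download, list_repo_files

repo = "mradermacher/CatNyanster-34b-i1-GGUF"
gguf_files = sorted(f for f in list_repo_files(repo) if f.endswith(".gguf"))
print("\n".join(gguf_files))

path = hf_hub_download(repo_id=repo, filename="CatNyanster-34b.i1-Q4_K_S.gguf")
print("downloaded to", path)
```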
mradermacher/XuanYuan-70B-Chat-GGUF
mradermacher
2024-05-06T05:17:20Z
36
0
transformers
[ "transformers", "gguf", "en", "base_model:Duxiaoman-DI/XuanYuan-70B-Chat", "base_model:quantized:Duxiaoman-DI/XuanYuan-70B-Chat", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-04T16:20:39Z
--- base_model: Duxiaoman-DI/XuanYuan-70B-Chat language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q2_K.gguf) | Q2_K | 26.0 | | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.IQ3_XS.gguf) | IQ3_XS | 28.9 | | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.IQ3_S.gguf) | IQ3_S | 30.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q3_K_S.gguf) | Q3_K_S | 30.5 | | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.IQ3_M.gguf) | IQ3_M | 31.5 | | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q3_K_M.gguf) | Q3_K_M | 33.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q3_K_L.gguf) | Q3_K_L | 36.7 | | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.IQ4_XS.gguf) | IQ4_XS | 37.8 | | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q4_K_S.gguf) | Q4_K_S | 39.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q4_K_M.gguf) | Q4_K_M | 42.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q5_K_S.gguf) | Q5_K_S | 48.0 | | | [GGUF](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q5_K_M.gguf) | Q5_K_M | 49.3 | | | [PART 1](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q6_K.gguf.part2of2) | Q6_K | 57.2 | very good quality | | [PART 1](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/XuanYuan-70B-Chat-GGUF/resolve/main/XuanYuan-70B-Chat.Q8_0.gguf.part2of2) | Q8_0 | 73.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some 
other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
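The Q6_K and Q8_0 quants in the table above are split into `.part1of2`/`.part2of2` files, and the Usage note defers to TheBloke's READMEs for how to concatenate them. Here is a minimal sketch of doing the same from Python, assuming `huggingface_hub` is installed and that the parts are a plain byte-wise split of one `.gguf` file (which is what concatenation implies).

```python
# Sketch: download both parts of the split Q6_K quant and join them into a single
# .gguf by byte-wise concatenation, assuming the parts are a plain split of one file.
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/XuanYuan-70B-Chat-GGUF"
parts = [
    "XuanYuan-70B-Chat.Q6_K.gguf.part1of2",
    "XuanYuan-70B-Chat.Q6_K.gguf.part2of2",
]

with open("XuanYuan-70B-Chat.Q6_K.gguf", "wb") as joined:
    for name in parts:
        part_path = hf_hub_download(repo_id=repo, filename=name)
        with open(part_path, "rb") as src:
            shutil.copyfileobj(src, joined)  # append this part to the joined file
```

The resulting `XuanYuan-70B-Chat.Q6_K.gguf` can then be loaded like any single-file quant.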
mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF
mradermacher
2024-05-06T05:17:03Z
14
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "Yi", "en", "base_model:brucethemoose/Yi-34B-200K-DARE-megamerge-v8", "base_model:quantized:brucethemoose/Yi-34B-200K-DARE-megamerge-v8", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-04T21:19:38Z
--- base_model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8 language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher tags: - mergekit - merge - Yi --- ## About <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-DARE-megamerge-v8-i1-GGUF/resolve/main/Yi-34B-200K-DARE-megamerge-v8.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF
mradermacher
2024-05-06T05:16:57Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic", "base_model:quantized:TroyDoesAI/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T22:43:02Z
--- base_model: TroyDoesAI/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TroyDoesAI/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q2_K.gguf) | Q2_K | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.IQ3_XS.gguf) | IQ3_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q3_K_S.gguf) | Q3_K_S | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.IQ3_M.gguf) | IQ3_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q3_K_L.gguf) | Q3_K_L | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q5_K_S.gguf) | Q5_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q5_K_M.gguf) | Q5_K_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q6_K.gguf) | Q6_K | 7.3 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF
mradermacher
2024-05-06T05:16:55Z
126
1
transformers
[ "transformers", "gguf", "story", "young children", "educational", "knowledge", "en", "dataset:ajibawa-2023/Children-Stories-Collection", "base_model:ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "base_model:quantized:ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T23:59:06Z
--- base_model: ajibawa-2023/Young-Children-Storyteller-Mistral-7B datasets: - ajibawa-2023/Children-Stories-Collection language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - story - young children - educational - knowledge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ajibawa-2023/Young-Children-Storyteller-Mistral-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Young-Children-Storyteller-Mistral-7B-GGUF/resolve/main/Young-Children-Storyteller-Mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/CatNyanster-7b-GGUF
mradermacher
2024-05-06T05:16:49Z
17
0
transformers
[ "transformers", "gguf", "merge", "en", "base_model:arlineka/CatNyanster-7b", "base_model:quantized:arlineka/CatNyanster-7b", "endpoints_compatible", "region:us" ]
null
2024-04-05T00:44:13Z
--- base_model: arlineka/CatNyanster-7b language: - en library_name: transformers quantized_by: mradermacher tags: - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/arlineka/CatNyanster-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CatNyanster-7b-GGUF/resolve/main/CatNyanster-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/NeuralNinja-2x-7B-GGUF
mradermacher
2024-05-06T05:16:45Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:Muhammad2003/NeuralNinja-2x-7B", "base_model:quantized:Muhammad2003/NeuralNinja-2x-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T00:55:22Z
--- base_model: Muhammad2003/NeuralNinja-2x-7B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Muhammad2003/NeuralNinja-2x-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.IQ3_M.gguf) | IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q5_K_S.gguf) | Q5_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q5_K_M.gguf) | Q5_K_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q6_K.gguf) | Q6_K | 10.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Cypher-7B-GGUF
mradermacher
2024-05-06T05:16:40Z
20
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "mistral", "nous", "westlake", "samantha", "en", "base_model:aloobun/Cypher-7B", "base_model:quantized:aloobun/Cypher-7B", "license:cc", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T01:45:10Z
--- base_model: aloobun/Cypher-7B language: - en library_name: transformers license: cc quantized_by: mradermacher tags: - mergekit - merge - mistral - nous - westlake - samantha --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/aloobun/Cypher-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/WizardLM-30B-V1.0-GGUF
mradermacher
2024-05-06T05:16:35Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:WizardLM/WizardLM-30B-V1.0", "base_model:quantized:WizardLM/WizardLM-30B-V1.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T01:57:20Z
--- base_model: WizardLM/WizardLM-30B-V1.0 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/WizardLM/WizardLM-30B-V1.0 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q2_K.gguf) | Q2_K | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.IQ3_XS.gguf) | IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q3_K_S.gguf) | Q3_K_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.IQ3_M.gguf) | IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q3_K_L.gguf) | Q3_K_L | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.IQ4_XS.gguf) | IQ4_XS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q5_K_S.gguf) | Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q5_K_M.gguf) | Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q6_K.gguf) | Q6_K | 26.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/dragonwar-7b-orpo-GGUF
mradermacher
2024-05-06T05:16:32Z
47
0
transformers
[ "transformers", "gguf", "unsloth", "book", "en", "dataset:vicgalle/OpenHermesPreferences-roleplay", "base_model:maldv/dragonwar-7b-orpo", "base_model:quantized:maldv/dragonwar-7b-orpo", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T02:07:39Z
--- base_model: maldv/dragonwar-7b-orpo datasets: - vicgalle/OpenHermesPreferences-roleplay language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - unsloth - book --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/maldv/dragonwar-7b-orpo <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-orpo-GGUF/resolve/main/dragonwar-7b-orpo.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Irene-RP-v5-7B-GGUF
mradermacher
2024-05-06T05:16:24Z
1
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "mistral", "roleplay", "en", "base_model:Virt-io/Irene-RP-v5-7B", "base_model:quantized:Virt-io/Irene-RP-v5-7B", "endpoints_compatible", "region:us" ]
null
2024-04-05T02:29:13Z
--- base_model: Virt-io/Irene-RP-v5-7B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge - mistral - roleplay --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Virt-io/Irene-RP-v5-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
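If you would rather inspect what is available before downloading, the sketch below enumerates the GGUF files in this repo with `huggingface_hub` (assumed installed) and filters on the `.gguf` extension; the listing should correspond to the quant table in the card above.

```python
# Sketch: enumerate the quant files in the repo instead of reading the table by hand.
# Assumes the huggingface_hub package is installed.
from huggingface_hub import list_repo_files

files = list_repo_files("mradermacher/Irene-RP-v5-7B-GGUF")
for name in sorted(f for f in files if f.endswith(".gguf")):
    print(name)  # e.g. Irene-RP-v5-7B.Q4_K_M.gguf
```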
mradermacher/MisCalmity-v0.1-model_stock-GGUF
mradermacher
2024-05-06T05:16:19Z
1
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:thag8/MisCalmity-v0.1-model_stock", "base_model:quantized:thag8/MisCalmity-v0.1-model_stock", "endpoints_compatible", "region:us" ]
null
2024-04-05T03:25:20Z
--- base_model: thag8/MisCalmity-v0.1-model_stock language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/thag8/MisCalmity-v0.1-model_stock <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MisCalmity-v0.1-model_stock-GGUF/resolve/main/MisCalmity-v0.1-model_stock.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Maxine-34B-stock-GGUF
mradermacher
2024-05-06T05:15:53Z
17
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "ConvexAI/Luminex-34B-v0.2", "fblgit/UNA-34BeagleSimpleMath-32K-v1", "chemistry", "biology", "math", "en", "base_model:louisbrulenaudet/Maxine-34B-stock", "base_model:quantized:louisbrulenaudet/Maxine-34B-stock", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T06:16:31Z
--- base_model: louisbrulenaudet/Maxine-34B-stock language: - en library_name: transformers license: apache-2.0 no_imatrix: 'GGML_ASSERT: llama.cpp/ggml-quants.c:11239: grid_index >= 0' quantized_by: mradermacher tags: - merge - mergekit - ConvexAI/Luminex-34B-v0.2 - fblgit/UNA-34BeagleSimpleMath-32K-v1 - chemistry - biology - math --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/louisbrulenaudet/Maxine-34B-stock <!-- provided-files --> ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Maxine-34B-stock-GGUF/resolve/main/Maxine-34B-stock.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/TableLLM-13b-GGUF
mradermacher
2024-05-06T05:15:43Z
129
0
transformers
[ "transformers", "gguf", "Table", "QA", "Code", "en", "dataset:RUCKBReasoning/TableLLM-SFT", "base_model:RUCKBReasoning/TableLLM-13b", "base_model:quantized:RUCKBReasoning/TableLLM-13b", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-05T06:37:28Z
--- base_model: RUCKBReasoning/TableLLM-13b datasets: - RUCKBReasoning/TableLLM-SFT language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - Table - QA - Code --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/RUCKBReasoning/TableLLM-13b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
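For readers unsure how to actually run a downloaded quant, here is a hedged sketch using the `llama-cpp-python` bindings (an assumption; llama.cpp's own CLI or any other GGUF-capable runtime works too). The file name matches the Q4_K_M row in the table above; the prompt, context size, and token limit are illustrative only.

```python
# Sketch: run a downloaded GGUF quant with llama-cpp-python (pip install llama-cpp-python).
# The model path assumes the Q4_K_M file from the table has already been downloaded.
from llama_cpp import Llama

llm = Llama(model_path="TableLLM-13b.Q4_K_M.gguf", n_ctx=4096)  # context size is illustrative
out = llm(
    "Summarize the columns of a sales table with columns date, region, revenue.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```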
mradermacher/Turkcell-LLM-7b-v1-GGUF
mradermacher
2024-05-06T05:15:40Z
54
4
transformers
[ "transformers", "gguf", "tr", "base_model:TURKCELL/Turkcell-LLM-7b-v1", "base_model:quantized:TURKCELL/Turkcell-LLM-7b-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T07:04:40Z
--- base_model: TURKCELL/Turkcell-LLM-7b-v1 language: - tr library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TURKCELL/Turkcell-LLM-7b-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q2_K.gguf) | Q2_K | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.IQ3_XS.gguf) | IQ3_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q3_K_L.gguf) | Q3_K_L | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q5_K_S.gguf) | Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Turkcell-LLM-7b-v1-GGUF/resolve/main/Turkcell-LLM-7b-v1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MaidFlameSoup-7B-GGUF
mradermacher
2024-05-06T05:15:34Z
21
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:nbeerbower/MaidFlameSoup-7B", "base_model:quantized:nbeerbower/MaidFlameSoup-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T08:10:09Z
--- base_model: nbeerbower/MaidFlameSoup-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nbeerbower/MaidFlameSoup-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
<!-- end -->
mradermacher/bophades-mistral-7B-GGUF
mradermacher
2024-05-06T05:15:28Z
15
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:nbeerbower/bophades-mistral-7B", "base_model:quantized:nbeerbower/bophades-mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T08:45:46Z
--- base_model: nbeerbower/bophades-mistral-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nbeerbower/bophades-mistral-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-7B-GGUF/resolve/main/bophades-mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF
mradermacher
2024-05-06T05:15:23Z
2
0
transformers
[ "transformers", "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T09:02:52Z
--- base_model: ParkTaeEon/Myrrh_solar_10.7b_v0.1-dpo language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ParkTaeEon/Myrrh_solar_10.7b_v0.1-dpo <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests 
for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/dragonwar-7b-alpha-GGUF
mradermacher
2024-05-06T05:15:21Z
14
0
transformers
[ "transformers", "gguf", "unsloth", "book", "en", "base_model:maldv/dragonwar-7b-alpha", "base_model:quantized:maldv/dragonwar-7b-alpha", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T09:36:06Z
--- base_model: maldv/dragonwar-7b-alpha language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - unsloth - book --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/maldv/dragonwar-7b-alpha <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF/resolve/main/dragonwar-7b-alpha.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Swallow-70b-NVE-RP-GGUF
mradermacher
2024-05-06T05:15:12Z
12
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "ja", "base_model:nitky/Swallow-70b-NVE-RP", "base_model:quantized:nitky/Swallow-70b-NVE-RP", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-05T09:56:33Z
--- base_model: nitky/Swallow-70b-NVE-RP language: - en - ja library_name: transformers license: llama2 model_type: llama quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nitky/Swallow-70b-NVE-RP <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF/resolve/main/Swallow-70b-NVE-RP.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or 
if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
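Since the Q6_K and Q8_0 quants in the card above are split into `.part1of2`/`.part2of2` files, here is a minimal Python sketch of the concatenation step the usage note refers to: plain binary concatenation of the parts in order. The part names are taken from the Q6_K row of the table; the output file name is an assumption.

```python
# Sketch: join a multi-part GGUF into a single file by plain binary concatenation.
# Part names come from the Q6_K row above; download both parts first, then run this.
import shutil

parts = [
    "Swallow-70b-NVE-RP.Q6_K.gguf.part1of2",
    "Swallow-70b-NVE-RP.Q6_K.gguf.part2of2",
]
with open("Swallow-70b-NVE-RP.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy, keeps memory use low
```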
mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF
mradermacher
2024-05-06T05:15:09Z
71
0
transformers
[ "transformers", "gguf", "ko", "en", "base_model:gwonny/nox-solar-10.7b-v4-kolon-ITD-5-v2.0", "base_model:quantized:gwonny/nox-solar-10.7b-v4-kolon-ITD-5-v2.0", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T10:26:41Z
--- base_model: gwonny/nox-solar-10.7b-v4-kolon-ITD-5-v2.0 language: - ko - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/gwonny/nox-solar-10.7b-v4-kolon-ITD-5-v2.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF/resolve/main/nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is 
better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
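For reference, each quant file in the table above can be fetched individually. A minimal sketch using the `huggingface_hub` package (an assumed tooling choice — any downloader works), with the repo id and file name copied from the Q4_K_M row:

```python
# Minimal sketch: download one quant file from the table above.
# Assumes the huggingface_hub package is installed; the repo id and
# file name are copied from the Q4_K_M row.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/nox-solar-10.7b-v4-kolon-ITD-5-v2.0-GGUF",
    filename="nox-solar-10.7b-v4-kolon-ITD-5-v2.0.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded .gguf file
```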
mradermacher/Mermaid-Yi-9B-GGUF
mradermacher
2024-05-06T05:15:07Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/Mermaid-Yi-9B", "base_model:quantized:TroyDoesAI/Mermaid-Yi-9B", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T11:26:08Z
--- base_model: TroyDoesAI/Mermaid-Yi-9B language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TroyDoesAI/Mermaid-Yi-9B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q2_K.gguf) | Q2_K | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.IQ3_XS.gguf) | IQ3_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q3_K_S.gguf) | Q3_K_S | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.IQ3_M.gguf) | IQ3_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q3_K_L.gguf) | Q3_K_L | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q5_K_S.gguf) | Q5_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q5_K_M.gguf) | Q5_K_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q6_K.gguf) | Q6_K | 7.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Yi-9B-GGUF/resolve/main/Mermaid-Yi-9B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
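One common way to run a downloaded GGUF quant locally is through the llama-cpp-python bindings (an assumption on tooling; llama.cpp's own CLI works just as well). A sketch using the Q4_K_M file from the table above:

```python
# Sketch: load a GGUF quant with llama-cpp-python and run a completion.
# Assumes `pip install llama-cpp-python`; the file name comes from the
# Q4_K_M row of the table above.
from llama_cpp import Llama

llm = Llama(
    model_path="Mermaid-Yi-9B.Q4_K_M.gguf",  # downloaded quant file
    n_ctx=4096,                              # context window to allocate
)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```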
Jaymax/llama3_FDA_qnabot_ver2-sft-test-push_ver2
Jaymax
2024-05-06T05:15:04Z
82
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-06T05:09:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Myrrh_solar_10.7b_v0.1-GGUF
mradermacher
2024-05-06T05:14:59Z
0
0
transformers
[ "transformers", "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T11:43:30Z
--- base_model: ParkTaeEon/Myrrh_solar_10.7b_v0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ParkTaeEon/Myrrh_solar_10.7b_v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
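Since this repo is tagged as conversational, a chat-style call may be more natural than a raw completion. A hedged sketch using llama-cpp-python's OpenAI-style `create_chat_completion` (the bindings and the system prompt are assumptions, not part of this repo):

```python
# Sketch: chat-style inference with llama-cpp-python (assumed installed).
# The file name is the Q4_K_S row from the table above; the system prompt
# is illustrative only.
from llama_cpp import Llama

llm = Llama(model_path="Myrrh_solar_10.7b_v0.1.Q4_K_S.gguf", n_ctx=4096)
resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce yourself in one sentence."},
    ],
    max_tokens=64,
)
print(resp["choices"][0]["message"]["content"])
```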
mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF
mradermacher
2024-05-06T05:14:57Z
4
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "ja", "license:llama2", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T12:00:20Z
--- base_model: Aratako/Superkarakuri-lm-chat-70b-v0.1 language: - ja library_name: transformers license: llama2 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Aratako/Superkarakuri-lm-chat-70b-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q2_K.gguf) | Q2_K | 25.7 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.IQ3_XS.gguf) | IQ3_XS | 28.6 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.IQ3_S.gguf) | IQ3_S | 30.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q3_K_S.gguf) | Q3_K_S | 30.2 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.IQ3_M.gguf) | IQ3_M | 31.2 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q3_K_M.gguf) | Q3_K_M | 33.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q3_K_L.gguf) | Q3_K_L | 36.4 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.IQ4_XS.gguf) | IQ4_XS | 37.4 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q4_K_S.gguf) | Q4_K_S | 39.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q4_K_M.gguf) | Q4_K_M | 41.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q5_K_S.gguf) | Q5_K_S | 47.7 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q5_K_M.gguf) | Q5_K_M | 49.0 | | | [PART 1](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q6_K.gguf.part2of2) | Q6_K | 56.9 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q8_0.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
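The Q6_K and Q8_0 rows above are split into `.partNofM` files, which must be concatenated back into a single `.gguf` before loading. A minimal sketch in Python (a plain `cat part1 part2 > file.gguf` does the same thing):

```python
# Sketch: reassemble a split quant (the PART 1 / PART 2 links above) into
# one .gguf file. The ".partNofM" naming is taken from this repo's files.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("Superkarakuri-lm-chat-70b-v0.1.Q6_K.gguf.part*of2"))
with open("Superkarakuri-lm-chat-70b-v0.1.Q6_K.gguf", "wb") as out:
    for part in parts:                   # part1of2 first, then part2of2
        with part.open("rb") as f:
            shutil.copyfileobj(f, out)   # stream copy, keeps memory use low
```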
mradermacher/pandafish-2-7b-32k-GGUF
mradermacher
2024-05-06T05:14:48Z
16
5
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "en", "base_model:ichigoberry/pandafish-2-7b-32k", "base_model:quantized:ichigoberry/pandafish-2-7b-32k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T14:05:35Z
--- base_model: ichigoberry/pandafish-2-7b-32k language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - cognitivecomputations/dolphin-2.8-mistral-7b-v02 --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ichigoberry/pandafish-2-7b-32k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF
mradermacher
2024-05-06T05:14:46Z
107
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Undi95/Mistral-ClaudeLimaRP-v3-7B", "SanjiWatsuki/Silicon-Maid-7B", "en", "base_model:akrads/ClaudeLimaRP-Maid-10.7B", "base_model:quantized:akrads/ClaudeLimaRP-Maid-10.7B", "endpoints_compatible", "region:us" ]
null
2024-04-05T14:16:23Z
--- base_model: akrads/ClaudeLimaRP-Maid-10.7B language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Undi95/Mistral-ClaudeLimaRP-v3-7B - SanjiWatsuki/Silicon-Maid-7B --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/akrads/ClaudeLimaRP-Maid-10.7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some 
answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Swallow-70b-NVE-RP-i1-GGUF
mradermacher
2024-05-06T05:14:43Z
99
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "ja", "base_model:nitky/Swallow-70b-NVE-RP", "base_model:quantized:nitky/Swallow-70b-NVE-RP", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-05T14:20:52Z
--- base_model: nitky/Swallow-70b-NVE-RP language: - en - ja library_name: transformers license: llama2 model_type: llama quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/nitky/Swallow-70b-NVE-RP <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
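The table above is mainly a size/quality tradeoff. As an illustration (not part of the repo), a tiny helper that picks the largest listed quant fitting a given memory budget, with a few (type, size) pairs copied from the table:

```python
# Illustrative helper only: choose the largest quant from the table above
# that fits a memory budget in GB. The pairs are copied from the table.
quants = [
    ("i1-IQ2_M", 23.3),
    ("i1-Q2_K", 25.6),
    ("i1-IQ3_M", 31.0),
    ("i1-Q4_K_S", 39.3),
    ("i1-Q4_K_M", 41.5),
    ("i1-Q5_K_M", 48.9),
]

def pick_quant(budget_gb: float) -> str:
    fitting = [(name, size) for name, size in quants if size <= budget_gb]
    return max(fitting, key=lambda q: q[1])[0] if fitting else "none fits"

print(pick_quant(40.0))  # -> i1-Q4_K_S
print(pick_quant(24.0))  # -> i1-IQ2_M
```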
mradermacher/anarchy-solar-10B-v1-GGUF
mradermacher
2024-05-06T05:14:37Z
0
0
transformers
[ "transformers", "gguf", "ko", "base_model:moondriller/anarchy-solar-10B-v1", "base_model:quantized:moondriller/anarchy-solar-10B-v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T15:34:15Z
--- base_model: moondriller/anarchy-solar-10B-v1 language: - ko library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/moondriller/anarchy-solar-10B-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/anarchy-llama2-13B-v2-GGUF
mradermacher
2024-05-06T05:14:30Z
3
0
transformers
[ "transformers", "gguf", "en", "base_model:moondriller/anarchy-llama2-13B-v2", "base_model:quantized:moondriller/anarchy-llama2-13B-v2", "endpoints_compatible", "region:us" ]
null
2024-04-05T16:58:16Z
--- base_model: moondriller/anarchy-llama2-13B-v2 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/moondriller/anarchy-llama2-13B-v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q3_K_S.gguf) | Q3_K_S | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.IQ3_M.gguf) | IQ3_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q3_K_L.gguf) | Q3_K_L | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q5_K_S.gguf) | Q5_K_S | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q6_K.gguf) | Q6_K | 10.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mergerix-7b-v0.5-GGUF
mradermacher
2024-05-06T05:14:27Z
5
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "automerger/YamshadowExperiment28-7B", "automerger/PasticheInex12-7B", "en", "base_model:MiniMoog/Mergerix-7b-v0.5", "base_model:quantized:MiniMoog/Mergerix-7b-v0.5", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T18:32:48Z
--- base_model: MiniMoog/Mergerix-7b-v0.5 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - automerger/YamshadowExperiment28-7B - automerger/PasticheInex12-7B --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MiniMoog/Mergerix-7b-v0.5 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
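As a rough sanity check on the sizes above (illustrative arithmetic only, assuming roughly 7.2B parameters for this 7B merge and GB ≈ 10^9 bytes), the Q4_K_M row works out to about five bits per weight:

```python
# Rough, illustrative arithmetic: bits per weight implied by a quant's
# file size. The 7.2e9 parameter count is an assumption for a 7B model;
# the 4.5 GB figure is the Q4_K_M row of the table above.
params = 7.2e9
size_bytes = 4.5 * 1e9
print(f"{size_bytes * 8 / params:.2f} bits per weight")  # ~5.00
```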
mradermacher/H4na-7B-v0.1-GGUF
mradermacher
2024-05-06T05:14:20Z
23
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "base_model:Smuggling1710/H4na-7B-v0.1", "base_model:quantized:Smuggling1710/H4na-7B-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T19:53:00Z
--- base_model: Smuggling1710/H4na-7B-v0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Smuggling1710/H4na-7B-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/13B-HyperMantis-GGUF
mradermacher
2024-05-06T05:14:16Z
97
0
transformers
[ "transformers", "gguf", "llama", "alpaca", "vicuna", "mix", "merge", "model merge", "roleplay", "chat", "instruct", "en", "base_model:digitous/13B-HyperMantis", "base_model:quantized:digitous/13B-HyperMantis", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-05T20:04:53Z
--- base_model: digitous/13B-HyperMantis language: - en library_name: transformers license: other quantized_by: mradermacher tags: - llama - alpaca - vicuna - mix - merge - model merge - roleplay - chat - instruct --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/digitous/13B-HyperMantis <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF
mradermacher
2024-05-06T05:13:56Z
3
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "fr", "it", "de", "es", "en", "base_model:Aratako/Mixtral-8x7B-Instruct-v0.1-upscaled", "base_model:quantized:Aratako/Mixtral-8x7B-Instruct-v0.1-upscaled", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T21:19:21Z
--- base_model: Aratako/Mixtral-8x7B-Instruct-v0.1-upscaled language: - fr - it - de - es - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Aratako/Mixtral-8x7B-Instruct-v0.1-upscaled <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q2_K.gguf) | Q2_K | 30.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.IQ3_XS.gguf) | IQ3_XS | 33.7 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.IQ3_S.gguf) | IQ3_S | 35.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q3_K_S.gguf) | Q3_K_S | 35.7 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.IQ3_M.gguf) | IQ3_M | 37.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q3_K_M.gguf) | Q3_K_M | 39.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q3_K_L.gguf) | Q3_K_L | 42.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.IQ4_XS.gguf) | IQ4_XS | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q4_K_S.gguf) | Q4_K_S | 46.8 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q4_K_M.gguf.part2of2) | Q4_K_M | 49.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q5_K_S.gguf.part2of2) | Q5_K_S | 56.4 | | | [PART 1](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q5_K_M.gguf.part2of2) | Q5_K_M | 58.1 | | | [PART 
1](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q6_K.gguf.part2of2) | Q6_K | 67.1 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.Q8_0.gguf.part2of2) | Q8_0 | 86.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
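For the split quants above, both halves can be fetched in one call with `huggingface_hub.snapshot_download` (an assumed tooling choice), restricting the download to the Q4_K_M part files via `allow_patterns`, then concatenating them as described in the Usage note:

```python
# Sketch: download both halves of the split Q4_K_M quant in one call.
# Assumes huggingface_hub is installed; the pattern matches the
# ".part1of2"/".part2of2" files listed above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-GGUF",
    allow_patterns=["*.Q4_K_M.gguf.part*"],
)
print(local_dir)  # folder containing the two part files, ready to concatenate
```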
mradermacher/13B-Chimera-GGUF
mradermacher
2024-05-06T05:13:49Z
34
0
transformers
[ "transformers", "gguf", "llama", "cot", "vicuna", "uncensored", "merge", "mix", "gptq", "en", "base_model:digitous/13B-Chimera", "base_model:quantized:digitous/13B-Chimera", "endpoints_compatible", "region:us" ]
null
2024-04-05T23:11:33Z
--- base_model: digitous/13B-Chimera language: - en library_name: transformers quantized_by: mradermacher tags: - llama - cot - vicuna - uncensored - merge - mix - gptq --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/digitous/13B-Chimera <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
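For readers who want a concrete starting point in addition to the linked READMEs, here is a minimal sketch that downloads one of the quants listed above and runs it with llama-cpp-python. This is just one of several GGUF-capable runtimes (plain llama.cpp, koboldcpp, ollama and others work as well), and the context length is an illustrative assumption, not a recommendation from the card:

```python
# Minimal sketch: fetch a single quant file and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="mradermacher/13B-Chimera-GGUF",
    filename="13B-Chimera.Q4_K_M.gguf",   # the "fast, recommended" row in the table above
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context length is an illustrative choice
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```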
mradermacher/LemonadeRP-4.5.3-11B-GGUF
mradermacher
2024-05-06T05:13:46Z
6
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:mpasila/LemonadeRP-4.5.3-11B", "base_model:quantized:mpasila/LemonadeRP-4.5.3-11B", "endpoints_compatible", "region:us" ]
null
2024-04-06T00:30:17Z
--- base_model: mpasila/LemonadeRP-4.5.3-11B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/mpasila/LemonadeRP-4.5.3-11B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/WizardLM-30B-V1.0-i1-GGUF
mradermacher
2024-05-06T05:13:43Z
13
0
transformers
[ "transformers", "gguf", "en", "base_model:WizardLM/WizardLM-30B-V1.0", "base_model:quantized:WizardLM/WizardLM-30B-V1.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T01:24:45Z
--- base_model: WizardLM/WizardLM-30B-V1.0 language: - en library_name: transformers no_imatrix: 'GGML_ASSERT: llama.cpp/ggml-quants.c:12166: besti1 >= 0 && besti2 >= 0 && best_k >= 0' quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/WizardLM/WizardLM-30B-V1.0 **No IQ1\* quants as llama.cpp is crashing when trying to generate it** <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF/resolve/main/WizardLM-30B-V1.0.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
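Since the tables sort quants by size rather than quality, one rough way to choose is to take the largest file that fits your RAM/VRAM while leaving headroom for the KV cache and runtime overhead. A small sketch using the Size/GB values from the table above; treating file size as a proxy for memory use and the 2 GB headroom figure are simplifying assumptions, not measurements:

```python
# Rough helper: pick the largest quant from the table above that fits a memory budget.
sizes_gb = {                       # values copied from the Size/GB column above
    "i1-IQ2_M": 11.3, "i1-Q2_K": 12.1, "i1-IQ3_XXS": 12.4,
    "i1-Q3_K_M": 15.9, "i1-IQ4_XS": 17.4, "i1-Q4_K_S": 18.6,
    "i1-Q4_K_M": 19.7, "i1-Q5_K_M": 23.1, "i1-Q6_K": 26.8,
}

def pick_quant(budget_gb: float, headroom_gb: float = 2.0) -> str | None:
    """Return the largest quant whose file size fits within budget_gb - headroom_gb."""
    usable = budget_gb - headroom_gb
    fitting = {name: size for name, size in sizes_gb.items() if size <= usable}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(24.0))   # e.g. a 24 GB GPU -> "i1-Q4_K_M"
```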
mradermacher/FNCARLplus-7b-GGUF
mradermacher
2024-05-06T05:13:36Z
90
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:jambroz/FNCARLplus-7b", "base_model:quantized:jambroz/FNCARLplus-7b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-06T02:16:58Z
--- base_model: jambroz/FNCARLplus-7b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jambroz/FNCARLplus-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/FNCARLplus-7b-GGUF/resolve/main/FNCARLplus-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF
mradermacher
2024-05-06T05:13:24Z
4
0
transformers
[ "transformers", "gguf", "en", "base_model:kaist-ai/prometheus-8x7b-v2.0-1-pp", "base_model:quantized:kaist-ai/prometheus-8x7b-v2.0-1-pp", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-06T05:27:51Z
--- base_model: kaist-ai/prometheus-8x7b-v2.0-1-pp language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/kaist-ai/prometheus-8x7b-v2.0-1-pp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q2_K.gguf) | Q2_K | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.IQ3_XS.gguf) | IQ3_XS | 19.4 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q3_K_S.gguf) | Q3_K_S | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.IQ3_M.gguf) | IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q3_K_L.gguf) | Q3_K_L | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.IQ4_XS.gguf) | IQ4_XS | 25.5 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q5_K_S.gguf) | Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q5_K_M.gguf) | Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q6_K.gguf) | Q6_K | 38.5 | very good quality | | [PART 1](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q8_0.gguf.part2of2) | Q8_0 | 49.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
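To check programmatically which quant files (and which multi-part splits, such as the two-part Q8_0 above) a repo actually provides before downloading, the Hub API can list the repository contents. A minimal sketch using the repo name from this card:

```python
# Minimal sketch: list the provided quant files in the repo before downloading.
from huggingface_hub import list_repo_files

files = list_repo_files("mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF")
for f in sorted(files):
    if f.endswith(".gguf") or ".gguf.part" in f:
        print(f)   # e.g. prometheus-8x7b-v2.0-1-pp.Q4_K_S.gguf
```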
mradermacher/pandafish-3-7B-32k-GGUF
mradermacher
2024-05-06T05:13:16Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:ichigoberry/pandafish-3-7B-32k", "base_model:quantized:ichigoberry/pandafish-3-7B-32k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T06:14:55Z
--- base_model: ichigoberry/pandafish-3-7B-32k language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ichigoberry/pandafish-3-7B-32k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/FNCARL-7b-dpo-GGUF
mradermacher
2024-05-06T05:12:49Z
4
0
transformers
[ "transformers", "gguf", "en", "base_model:jambroz/FNCARL-7b-dpo", "base_model:quantized:jambroz/FNCARL-7b-dpo", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-06T09:46:29Z
--- base_model: jambroz/FNCARL-7b-dpo language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jambroz/FNCARL-7b-dpo <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-dpo-GGUF/resolve/main/FNCARL-7b-dpo.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Pioneer-2x7B-GGUF
mradermacher
2024-05-06T05:12:46Z
78
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:hibana2077/Pioneer-2x7B", "base_model:quantized:hibana2077/Pioneer-2x7B", "endpoints_compatible", "region:us" ]
null
2024-04-06T10:24:47Z
--- base_model: hibana2077/Pioneer-2x7B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/hibana2077/Pioneer-2x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Wittgenbot-7B-GGUF
mradermacher
2024-05-06T05:12:38Z
6
0
transformers
[ "transformers", "gguf", "en", "base_model:descartesevildemon/Wittgenbot-7B", "base_model:quantized:descartesevildemon/Wittgenbot-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-06T10:54:34Z
--- base_model: descartesevildemon/Wittgenbot-7B language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/descartesevildemon/Wittgenbot-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF
mradermacher
2024-05-06T05:12:26Z
1
0
transformers
[ "transformers", "gguf", "SkillEnhanced", "mistral", "en", "base_model:HachiML/Swallow-MS-7b-v0.1-ChatMathSkill", "base_model:quantized:HachiML/Swallow-MS-7b-v0.1-ChatMathSkill", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T11:24:30Z
--- base_model: HachiML/Swallow-MS-7b-v0.1-ChatMathSkill language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - SkillEnhanced - mistral --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/HachiML/Swallow-MS-7b-v0.1-ChatMathSkill <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q2_K.gguf) | Q2_K | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.IQ3_XS.gguf) | IQ3_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q3_K_L.gguf) | Q3_K_L | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q5_K_S.gguf) | Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q6_K.gguf) | Q6_K | 6.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
jeongmi/solar_insta_chai_80_final
jeongmi
2024-05-06T05:12:10Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-04-22T04:05:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/v1olet_merged_dpo_7B-GGUF
mradermacher
2024-05-06T05:12:04Z
35
0
transformers
[ "transformers", "gguf", "en", "base_model:v1olet/v1olet_merged_dpo_7B", "base_model:quantized:v1olet/v1olet_merged_dpo_7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T15:17:18Z
--- base_model: v1olet/v1olet_merged_dpo_7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/v1olet/v1olet_merged_dpo_7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/v1olet_merged_dpo_7B-GGUF/resolve/main/v1olet_merged_dpo_7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Alpaca-elina-65b-GGUF
mradermacher
2024-05-06T05:11:45Z
5
0
transformers
[ "transformers", "gguf", "en", "base_model:Aeala/Alpaca-elina-65b", "base_model:quantized:Aeala/Alpaca-elina-65b", "endpoints_compatible", "region:us" ]
null
2024-04-06T19:05:16Z
--- base_model: Aeala/Alpaca-elina-65b language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Aeala/Alpaca-elina-65b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alpaca-elina-65b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q2_K.gguf) | Q2_K | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.IQ3_XS.gguf) | IQ3_XS | 26.7 | | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.IQ3_S.gguf) | IQ3_S | 28.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q3_K_S.gguf) | Q3_K_S | 28.3 | | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.IQ3_M.gguf) | IQ3_M | 29.9 | | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q3_K_M.gguf) | Q3_K_M | 31.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q3_K_L.gguf) | Q3_K_L | 34.7 | | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.IQ4_XS.gguf) | IQ4_XS | 35.1 | | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q4_K_S.gguf) | Q4_K_S | 37.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q4_K_M.gguf) | Q4_K_M | 39.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q5_K_S.gguf) | Q5_K_S | 45.0 | | | [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q5_K_M.gguf) | Q5_K_M | 46.3 | | | [PART 1](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q6_K.gguf.part2of2) | Q6_K | 53.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q8_0.gguf.part2of2) | Q8_0 | 69.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
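After joining split parts such as the two-part Q6_K and Q8_0 above, a quick sanity check is that a valid GGUF file begins with the 4-byte magic `GGUF`. A minimal sketch (the file name is illustrative):

```python
# Quick sanity check on an assembled file: GGUF files start with the magic b"GGUF".
with open("Alpaca-elina-65b.Q6_K.gguf", "rb") as f:
    magic = f.read(4)
print("looks like GGUF" if magic == b"GGUF" else f"unexpected header: {magic!r}")
```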
mradermacher/WinterGoddess-1.4x-70B-L2-GGUF
mradermacher
2024-05-06T05:11:42Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/WinterGoddess-1.4x-70B-L2", "base_model:quantized:Sao10K/WinterGoddess-1.4x-70B-L2", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T19:40:17Z
--- base_model: Sao10K/WinterGoddess-1.4x-70B-L2 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the 
matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
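Several of the larger quants in the table above (Q6_K, Q8_0) are split into `*.part1of2` / `*.part2of2` files, and the Usage section defers to TheBloke's READMEs for how to concatenate them. As a minimal sketch, assuming the parts have already been downloaded into the working directory, they can be joined like this (the filenames below are taken from the Q6_K row of the table):

```python
# Minimal sketch: join split GGUF parts (*.gguf.part1of2, *.gguf.part2of2) into a
# single file. Assumes the parts are already present in the current directory;
# the filenames match the Q6_K row in the table above.
from pathlib import Path

parts = sorted(Path(".").glob("WinterGoddess-1.4x-70B-L2.Q6_K.gguf.part*"))
target = Path("WinterGoddess-1.4x-70B-L2.Q6_K.gguf")

with target.open("wb") as out:
    for part in parts:
        with part.open("rb") as src:
            # stream in chunks so a multi-GB part never sits in memory at once
            while chunk := src.read(16 * 1024 * 1024):
                out.write(chunk)

print(f"wrote {target} ({target.stat().st_size / 1e9:.1f} GB)")
```

On Unix-like systems a plain `cat WinterGoddess-1.4x-70B-L2.Q6_K.gguf.part1of2 WinterGoddess-1.4x-70B-L2.Q6_K.gguf.part2of2 > WinterGoddess-1.4x-70B-L2.Q6_K.gguf` achieves the same result.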
mradermacher/Mermaid_13B-GGUF
mradermacher
2024-05-06T05:11:29Z
118
0
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/Mermaid_13B", "base_model:quantized:TroyDoesAI/Mermaid_13B", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T22:03:19Z
--- base_model: TroyDoesAI/Mermaid_13B language: - en library_name: transformers license: cc-by-nc-sa-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TroyDoesAI/Mermaid_13B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q2_K.gguf) | Q2_K | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.IQ3_XS.gguf) | IQ3_XS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q3_K_S.gguf) | Q3_K_S | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.IQ3_M.gguf) | IQ3_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q3_K_M.gguf) | Q3_K_M | 6.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q3_K_L.gguf) | Q3_K_L | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.IQ4_XS.gguf) | IQ4_XS | 7.4 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q4_K_S.gguf) | Q4_K_S | 7.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q5_K_S.gguf) | Q5_K_S | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q5_K_M.gguf) | Q5_K_M | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q6_K.gguf) | Q6_K | 11.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid_13B-GGUF/resolve/main/Mermaid_13B.Q8_0.gguf) | Q8_0 | 14.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
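As a minimal sketch of actually running one of these files (not something the card itself documents), assuming the `llama-cpp-python` bindings are installed and the Q4_K_M quant from the table has been downloaded locally:

```python
# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python
# (pip install llama-cpp-python). The model path matches the Q4_K_M row above and
# is assumed to exist locally; the prompt is illustrative only, since the card
# does not specify a chat template.
from llama_cpp import Llama

llm = Llama(
    model_path="Mermaid_13B.Q4_K_M.gguf",
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers if built with GPU support, else ignored
)

out = llm("Describe the steps for brewing tea as a flowchart:", max_tokens=128)
print(out["choices"][0]["text"])
```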
mradermacher/PandafishHeatherReReloaded-GGUF
mradermacher
2024-05-06T05:11:22Z
131
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "MysticFoxMagic/HeatherSpell-7b", "ichigoberry/pandafish-2-7b-32k", "en", "base_model:MysticFoxMagic/PandafishHeatherReReloaded", "base_model:quantized:MysticFoxMagic/PandafishHeatherReReloaded", "endpoints_compatible", "region:us" ]
null
2024-04-07T00:10:05Z
--- base_model: MysticFoxMagic/PandafishHeatherReReloaded language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - MysticFoxMagic/HeatherSpell-7b - ichigoberry/pandafish-2-7b-32k --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MysticFoxMagic/PandafishHeatherReReloaded <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
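For completeness, one way to fetch a single quant file from the table programmatically, assuming the `huggingface_hub` package is installed (the repo id and filename are taken from the Q4_K_S row):

```python
# Minimal sketch: download one quant file with huggingface_hub
# (pip install huggingface_hub). Repo id and filename come from the Q4_K_S row
# in the table above; the file lands in the local Hugging Face cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/PandafishHeatherReReloaded-GGUF",
    filename="PandafishHeatherReReloaded.Q4_K_S.gguf",
)
print("downloaded to:", path)
```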
CodeTriad/mistral-base-finetune-15000-unique-second
CodeTriad
2024-05-06T05:11:21Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-05-06T05:11:05Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
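The "How to Get Started" section of this card is still marked [More Information Needed]. Based only on the declared metadata (library_name: peft, base_model: mistralai/Mistral-7B-v0.1, PEFT 0.10.0), a hedged sketch of how such an adapter is typically attached to its base model might look as follows; this is an assumption about usage, not the authors' documented procedure, and the prompt is purely illustrative:

```python
# Hedged sketch (not taken from the model card): attach this PEFT adapter to the
# base model declared in its metadata. Repo ids come from the card; everything
# else (dtype, device placement, prompt) is assumed for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"                          # from card metadata
adapter_id = "CodeTriad/mistral-base-finetune-15000-unique-second"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)            # load adapter weights

inputs = tokenizer("Hello, world!", return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```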
mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF
mradermacher
2024-05-06T05:11:19Z
11
0
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/WinterGoddess-1.4x-70B-L2", "base_model:quantized:Sao10K/WinterGoddess-1.4x-70B-L2", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-07T00:23:15Z
--- base_model: Sao10K/WinterGoddess-1.4x-70B-L2 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low 
quality | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
harshraj/phi-1_5_hinglish_text_pretrained
harshraj
2024-05-06T05:11:18Z
168
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T04:31:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
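This card is the autogenerated template, so usage is not documented. Going only by the repo metadata (a `phi` architecture with the `text-generation` pipeline tag), a minimal, hedged sketch of loading it with a plain transformers pipeline would be:

```python
# Hedged sketch (not taken from the model card): the repo metadata declares a phi
# architecture with the text-generation pipeline tag, so a standard transformers
# pipeline call is assumed to work. Prompt and generation settings are illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="harshraj/phi-1_5_hinglish_text_pretrained",
    device_map="auto",   # falls back to CPU if no accelerator is available
)

print(generator("Aaj ka mausam bahut", max_new_tokens=40)[0]["generated_text"])
```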
mradermacher/XunziALLM-GGUF
mradermacher
2024-05-06T05:11:17Z
69
0
transformers
[ "transformers", "gguf", "en", "base_model:ccwu0918/XunziALLM", "base_model:quantized:ccwu0918/XunziALLM", "endpoints_compatible", "region:us" ]
null
2024-04-07T01:10:14Z
--- base_model: ccwu0918/XunziALLM language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ccwu0918/XunziALLM <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.IQ3_S.gguf) | IQ3_S | 3.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q3_K_S.gguf) | Q3_K_S | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.IQ3_M.gguf) | IQ3_M | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q5_K_S.gguf) | Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/XunziALLM-GGUF/resolve/main/XunziALLM.SOURCE.gguf) | SOURCE | 15.5 | source gguf, only provided when it was hard to come by | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/OpenDolphin-7B-slerp-GGUF
mradermacher
2024-05-06T05:11:13Z
16
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "macadeliccc/Mistral-7B-v0.2-OpenHermes", "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "en", "base_model:WesPro/OpenDolphin-7B-slerp", "base_model:quantized:WesPro/OpenDolphin-7B-slerp", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-07T01:27:55Z
--- base_model: WesPro/OpenDolphin-7B-slerp language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - macadeliccc/Mistral-7B-v0.2-OpenHermes - cognitivecomputations/dolphin-2.8-mistral-7b-v02 --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/WesPro/OpenDolphin-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/OpenDolphin-7B-slerp-GGUF/resolve/main/OpenDolphin-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other 
model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/PandafishHeatherReloaded-GGUF
mradermacher
2024-05-06T05:11:11Z
89
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "ichigoberry/pandafish-dt-7b", "MysticFoxMagic/HeatherSpell-7b", "en", "base_model:MysticFoxMagic/PandafishHeatherReloaded", "base_model:quantized:MysticFoxMagic/PandafishHeatherReloaded", "endpoints_compatible", "region:us" ]
null
2024-04-07T01:32:30Z
--- base_model: MysticFoxMagic/PandafishHeatherReloaded language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - ichigoberry/pandafish-dt-7b - MysticFoxMagic/HeatherSpell-7b --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MysticFoxMagic/PandafishHeatherReloaded <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Tess-72B-v1.5b-GGUF
mradermacher
2024-05-06T05:11:02Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:migtissera/Tess-72B-v1.5b", "base_model:quantized:migtissera/Tess-72B-v1.5b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-07T03:47:12Z
--- base_model: migtissera/Tess-72B-v1.5b language: - en library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen-72B/blob/main/LICENSE license_name: qwen-72b-licence quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/migtissera/Tess-72B-v1.5b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q2_K.gguf) | Q2_K | 27.2 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.IQ3_XS.gguf) | IQ3_XS | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.IQ3_S.gguf) | IQ3_S | 31.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q3_K_S.gguf) | Q3_K_S | 31.7 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.IQ3_M.gguf) | IQ3_M | 33.4 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q3_K_M.gguf) | Q3_K_M | 35.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q3_K_L.gguf) | Q3_K_L | 38.6 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.IQ4_XS.gguf) | IQ4_XS | 39.2 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q4_K_S.gguf) | Q4_K_S | 41.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q4_K_M.gguf) | Q4_K_M | 43.9 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q5_K_S.gguf.part2of2) | Q5_K_S | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q5_K_M.gguf.part2of2) | Q5_K_M | 51.4 | | | [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q6_K.gguf.part2of2) | Q6_K | 59.4 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q8_0.gguf.part2of2) | Q8_0 | 76.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
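Since the larger quants here (Q5_K_S and up) are split into `*.partNofM` files, a pattern-based `snapshot_download` from `huggingface_hub` is one way to pull every part of a chosen quant in a single call; this is an assumed usage pattern, and the parts still need to be concatenated afterwards as the Usage section describes:

```python
# Minimal sketch: fetch all parts of one split quant with a pattern filter.
# Repo id and filename pattern come from the Q6_K row above; snapshot_download
# returns the local snapshot directory containing the matched files.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mradermacher/Tess-72B-v1.5b-GGUF",
    allow_patterns=["Tess-72B-v1.5b.Q6_K.gguf.part*"],  # both Q6_K parts
)
print("parts stored under:", local_dir)
```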
mradermacher/Limitless-GGUF
mradermacher
2024-05-06T05:10:56Z
116
0
transformers
[ "transformers", "gguf", "en", "base_model:alkahestry/Limitless", "base_model:quantized:alkahestry/Limitless", "endpoints_compatible", "region:us" ]
null
2024-04-07T04:22:23Z
--- base_model: alkahestry/Limitless language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/alkahestry/Limitless <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/WinterGoddess-1.4x-70b-32k-GGUF
mradermacher
2024-05-06T05:10:33Z
41
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ChuckMcSneed/WinterGoddess-1.4x-70b-32k", "base_model:quantized:ChuckMcSneed/WinterGoddess-1.4x-70b-32k", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-07T08:19:24Z
--- base_model: ChuckMcSneed/WinterGoddess-1.4x-70b-32k language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ChuckMcSneed/WinterGoddess-1.4x-70b-32k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Tess-34B-v1.5b-i1-GGUF
mradermacher
2024-05-06T05:10:23Z
29
0
transformers
[ "transformers", "gguf", "en", "base_model:migtissera/Tess-34B-v1.5b", "base_model:quantized:migtissera/Tess-34B-v1.5b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-07T10:33:11Z
--- base_model: migtissera/Tess-34B-v1.5b language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE license_name: yi-34b quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/migtissera/Tess-34B-v1.5b **This uses only 40k tokens of my standard set, as the model overflowed with more.** <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Tess-34B-v1.5b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Tess-34B-v1.5b-i1-GGUF/resolve/main/Tess-34B-v1.5b.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/HeroBophades-3x7B-GGUF
mradermacher
2024-05-06T05:10:15Z
33
0
transformers
[ "transformers", "gguf", "en", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:kyujinpy/orca_math_dpo", "dataset:jondurbin/gutenberg-dpo-v0.1", "base_model:nbeerbower/HeroBophades-3x7B", "base_model:quantized:nbeerbower/HeroBophades-3x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-07T12:06:59Z
--- base_model: nbeerbower/HeroBophades-3x7B datasets: - jondurbin/truthy-dpo-v0.1 - kyujinpy/orca_math_dpo - jondurbin/gutenberg-dpo-v0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nbeerbower/HeroBophades-3x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q2_K.gguf) | Q2_K | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.IQ3_XS.gguf) | IQ3_XS | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q3_K_S.gguf) | Q3_K_S | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.IQ3_M.gguf) | IQ3_M | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q3_K_L.gguf) | Q3_K_L | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.IQ4_XS.gguf) | IQ4_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q5_K_S.gguf) | Q5_K_S | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q5_K_M.gguf) | Q5_K_M | 13.2 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q6_K.gguf) | Q6_K | 15.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
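The card above lists each quant as a direct download link. As a minimal sketch (not something the card itself documents), one way to fetch a single quant file programmatically is with the `huggingface_hub` Python package; the package choice and caching behaviour are assumptions, and the repo/file names are taken from the Q4_K_S row of the table above.

```python
# Hypothetical sketch: download one quant file from the repo listed above.
# Assumes the `huggingface_hub` package is installed; the card only links the
# files directly and does not prescribe this method.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/HeroBophades-3x7B-GGUF",
    filename="HeroBophades-3x7B.Q4_K_S.gguf",  # "fast, recommended" row above
)
print(path)  # local path of the cached .gguf file
```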
mradermacher/NeuralSynthesis-7B-v0.3-GGUF
mradermacher
2024-05-06T05:10:09Z
10
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:Kukedlc/NeuralSynthesis-7B-v0.3", "base_model:quantized:Kukedlc/NeuralSynthesis-7B-v0.3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-07T15:03:53Z
--- base_model: Kukedlc/NeuralSynthesis-7B-v0.3 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Kukedlc/NeuralSynthesis-7B-v0.3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuralSynthesis-7B-v0.3-GGUF/resolve/main/NeuralSynthesis-7B-v0.3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if 
you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
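The Usage section above defers to TheBloke's READMEs for how to run GGUF files. As a minimal illustration under stated assumptions, the sketch below loads one of the quants from the table with the `llama-cpp-python` bindings; the library, context size, and prompt are illustrative choices, not recommendations from the card.

```python
# Hypothetical sketch: run a downloaded quant locally with llama-cpp-python.
# The card does not mandate this library; it is assumed here for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="NeuralSynthesis-7B-v0.3.Q4_K_M.gguf",  # "fast, recommended" row above
    n_ctx=2048,  # context window; adjust to available memory
)
out = llm("Summarize what GGUF quantization does in one sentence.", max_tokens=48)
print(out["choices"][0]["text"])
```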
mradermacher/Enterredaas-33b-i1-GGUF
mradermacher
2024-05-06T05:10:03Z
5
0
transformers
[ "transformers", "gguf", "en", "base_model:Aeala/Enterredaas-33b", "base_model:quantized:Aeala/Enterredaas-33b", "endpoints_compatible", "region:us" ]
null
2024-04-07T16:10:25Z
--- base_model: Aeala/Enterredaas-33b language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/Aeala/Enterredaas-33b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Enterredaas-33b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF/resolve/main/Enterredaas-33b.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/dolphin-mistral-TRACHI-7b-GGUF
mradermacher
2024-05-06T05:09:58Z
1
0
transformers
[ "transformers", "gguf", "en", "dataset:norygano/TRACHI", "base_model:norygano/dolphin-mistral-TRACHI-7b", "base_model:quantized:norygano/dolphin-mistral-TRACHI-7b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-07T16:46:03Z
--- base_model: norygano/dolphin-mistral-TRACHI-7b datasets: - norygano/TRACHI language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/norygano/dolphin-mistral-TRACHI-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-TRACHI-7b-GGUF/resolve/main/dolphin-mistral-TRACHI-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for 
some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Tess-72B-v1.5b-i1-GGUF
mradermacher
2024-05-06T05:09:52Z
4
0
transformers
[ "transformers", "gguf", "en", "base_model:migtissera/Tess-72B-v1.5b", "base_model:quantized:migtissera/Tess-72B-v1.5b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-07T17:30:16Z
--- base_model: migtissera/Tess-72B-v1.5b language: - en library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen-72B/blob/main/LICENSE license_name: qwen-72b-licence quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/migtissera/Tess-72B-v1.5b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ1_S.gguf) | i1-IQ1_S | 16.3 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ1_M.gguf) | i1-IQ1_M | 17.7 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.9 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.9 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ2_S.gguf) | i1-IQ2_S | 23.5 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ2_M.gguf) | i1-IQ2_M | 25.3 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q2_K.gguf) | i1-Q2_K | 27.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ3_S.gguf) | i1-IQ3_S | 31.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ3_M.gguf) | i1-IQ3_M | 33.4 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 35.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 38.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.9 | | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q4_0.gguf) | i1-Q4_0 | 41.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 41.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q4_K_M.gguf) | 
i1-Q4_K_M | 43.9 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 51.4 | | | [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF/resolve/main/Tess-72B-v1.5b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 59.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
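The larger quants above (i1-Q5_K_S and up) are split into PART 1/PART 2 files, and the Usage section points to TheBloke's READMEs for how to concatenate multi-part files. A minimal sketch of that step, assuming plain byte-wise concatenation in part order, is shown below; the exact procedure in those READMEs may differ.

```python
# Hypothetical sketch: join a split quant back into a single .gguf file.
# Assumption: the parts simply need to be appended in numerical order.
import shutil

parts = [
    "Tess-72B-v1.5b.i1-Q5_K_S.gguf.part1of2",
    "Tess-72B-v1.5b.i1-Q5_K_S.gguf.part2of2",
]
with open("Tess-72B-v1.5b.i1-Q5_K_S.gguf", "wb") as out:
    for part in parts:                    # order matters: part1 before part2
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams bytes without loading into RAM
```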
mradermacher/Pearl-3x7B-GGUF
mradermacher
2024-05-06T05:09:47Z
75
1
transformers
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "dvilasuero/DistilabelBeagle14-7B", "beowolx/CodeNinja-1.0-OpenChat-7B", "WizardLM/WizardMath-7B-V1.1", "Maths", "Code", "Python", "en", "base_model:louisbrulenaudet/Pearl-3x7B", "base_model:quantized:louisbrulenaudet/Pearl-3x7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-07T18:52:17Z
--- base_model: louisbrulenaudet/Pearl-3x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - frankenmoe - merge - mergekit - lazymergekit - dvilasuero/DistilabelBeagle14-7B - beowolx/CodeNinja-1.0-OpenChat-7B - WizardLM/WizardMath-7B-V1.1 - Maths - Code - Python --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/louisbrulenaudet/Pearl-3x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q2_K.gguf) | Q2_K | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.IQ3_XS.gguf) | IQ3_XS | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q3_K_S.gguf) | Q3_K_S | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.IQ3_M.gguf) | IQ3_M | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q3_K_L.gguf) | Q3_K_L | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.IQ4_XS.gguf) | IQ4_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q5_K_S.gguf) | Q5_K_S | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q5_K_M.gguf) | Q5_K_M | 13.2 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q6_K.gguf) | Q6_K | 15.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Pearl-3x7B-GGUF/resolve/main/Pearl-3x7B.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
<!-- end -->
mradermacher/Fuse-Dolphin-7B-GGUF
mradermacher
2024-05-06T05:09:41Z
3
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:bunnycore/Fuse-Dolphin-7B", "base_model:quantized:bunnycore/Fuse-Dolphin-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-07T21:15:42Z
--- base_model: bunnycore/Fuse-Dolphin-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/bunnycore/Fuse-Dolphin-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Fuse-Dolphin-7B-GGUF/resolve/main/Fuse-Dolphin-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF
mradermacher
2024-05-06T05:09:38Z
50
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ChuckMcSneed/WinterGoddess-1.4x-70b-32k", "base_model:quantized:ChuckMcSneed/WinterGoddess-1.4x-70b-32k", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-07T21:20:06Z
--- base_model: ChuckMcSneed/WinterGoddess-1.4x-70b-32k language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/ChuckMcSneed/WinterGoddess-1.4x-70b-32k <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | 
[GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
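Since the quants above are sorted by size, a common way to choose one is to pick the largest type that fits the available memory. The sketch below is an illustrative helper, not part of the card: the sizes are copied from a few rows of the WinterGoddess table, and the ~1 GB overhead margin is an assumption.

```python
# Hypothetical sketch: pick the largest listed quant under a memory budget.
quants = [               # (type, Size/GB) from rows of the table above
    ("i1-IQ2_M", 23.3),
    ("i1-Q3_K_M", 33.4),
    ("i1-Q4_K_S", 39.3),
    ("i1-Q4_K_M", 41.5),
    ("i1-Q5_K_M", 48.9),
]

def pick_quant(budget_gb: float, margin_gb: float = 1.0) -> str:
    """Return the largest listed quant that fits under the budget (assumed margin)."""
    fitting = [(t, s) for t, s in quants if s + margin_gb <= budget_gb]
    if not fitting:
        raise ValueError("no listed quant fits this budget")
    return max(fitting, key=lambda ts: ts[1])[0]

print(pick_quant(48.0))  # -> "i1-Q4_K_M" with the rows listed here
```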
mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF
mradermacher
2024-05-06T05:09:35Z
221
0
transformers
[ "transformers", "gguf", "en", "base_model:Aeala/GPT4-x-AlpacaDente2-30b", "base_model:quantized:Aeala/GPT4-x-AlpacaDente2-30b", "endpoints_compatible", "region:us" ]
null
2024-04-07T21:25:07Z
--- base_model: Aeala/GPT4-x-AlpacaDente2-30b language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/Aeala/GPT4-x-AlpacaDente2-30b **This uses only 40k tokens of my standard set, as the model overflowed with more.** <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF
mradermacher
2024-05-06T05:09:20Z
17
0
transformers
[ "transformers", "gguf", "text-generation-inference", "merge", "en", "base_model:brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity", "base_model:quantized:brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-08T00:38:32Z
--- base_model: brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher tags: - text-generation-inference - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ3_S.gguf) | 
i1-IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-i1-GGUF/resolve/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Honyaku-7b-v2-GGUF
mradermacher
2024-05-06T05:09:17Z
7
0
transformers
[ "transformers", "gguf", "en", "base_model:aixsatoshi/Honyaku-7b-v2", "base_model:quantized:aixsatoshi/Honyaku-7b-v2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-08T02:22:57Z
--- base_model: aixsatoshi/Honyaku-7b-v2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/aixsatoshi/Honyaku-7b-v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q2_K.gguf) | Q2_K | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.IQ3_XS.gguf) | IQ3_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q3_K_L.gguf) | Q3_K_L | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q5_K_S.gguf) | Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q6_K.gguf) | Q6_K | 6.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Honyaku-7b-v2-GGUF/resolve/main/Honyaku-7b-v2.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/GPT4-x-AlpacaDente-30b-GGUF
mradermacher
2024-05-06T05:09:15Z
9
0
transformers
[ "transformers", "gguf", "en", "base_model:Aeala/GPT4-x-AlpacaDente-30b", "base_model:quantized:Aeala/GPT4-x-AlpacaDente-30b", "endpoints_compatible", "region:us" ]
null
2024-04-08T03:27:38Z
--- base_model: Aeala/GPT4-x-AlpacaDente-30b language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Aeala/GPT4-x-AlpacaDente-30b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q2_K.gguf) | Q2_K | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.IQ3_XS.gguf) | IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q3_K_S.gguf) | Q3_K_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.IQ3_M.gguf) | IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q3_K_L.gguf) | Q3_K_L | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.IQ4_XS.gguf) | IQ4_XS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q5_K_S.gguf) | Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q5_K_M.gguf) | Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q6_K.gguf) | Q6_K | 26.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Chimera-Apex-7B-GGUF
mradermacher
2024-05-06T05:09:05Z
295
2
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:bunnycore/Chimera-Apex-7B", "base_model:quantized:bunnycore/Chimera-Apex-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-08T03:57:50Z
--- base_model: bunnycore/Chimera-Apex-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/bunnycore/Chimera-Apex-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
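The "Provided Quants" table above mirrors the files actually stored in the repository. As a small hedged sketch (again assuming the `huggingface_hub` package, which the card does not mention), one can list those files directly instead of reading them off the card:

```python
# Hypothetical sketch: list the .gguf files present in the repo shown above.
from huggingface_hub import HfApi

files = HfApi().list_repo_files("mradermacher/Chimera-Apex-7B-GGUF")
for name in sorted(f for f in files if f.endswith(".gguf")):
    print(name)  # e.g. Chimera-Apex-7B.Q4_K_S.gguf
```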
mradermacher/BiscuitRP-8x7B-GGUF
mradermacher
2024-05-06T05:09:02Z
33
1
transformers
[ "transformers", "gguf", "rp", "roleplay", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-08T04:04:27Z
---
base_model: Fredithefish/BiscuitRP-8x7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- rp
- roleplay
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/Fredithefish/BiscuitRP-8x7B

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q8_0.gguf.part2of2) | Q8_0 | 49.7 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
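The Q8_0 quant above is split into two parts, which simply need to be concatenated byte-for-byte before loading. Below is a minimal sketch of doing that in Python, assuming both parts have already been downloaded into the working directory; it is equivalent to `cat`-ing the two parts together.

```python
# Minimal sketch: join the two Q8_0 parts of BiscuitRP-8x7B into one GGUF file.
# Assumes both .partXofY files are already in the current directory.
import shutil

parts = [
    "BiscuitRP-8x7B.Q8_0.gguf.part1of2",
    "BiscuitRP-8x7B.Q8_0.gguf.part2of2",
]

with open("BiscuitRP-8x7B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream each part into the output instead of reading ~50 GB into memory.
            shutil.copyfileobj(src, out)
```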
mradermacher/CelestiaRP-8x7B-i1-GGUF
mradermacher
2024-05-06T05:08:58Z
2
1
transformers
[ "transformers", "gguf", "en", "endpoints_compatible", "region:us" ]
null
2024-04-08T05:20:14Z
---
base_model: Fredithefish/CelestiaRP-8x7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

weighted/imatrix quants of https://huggingface.co/Fredithefish/CelestiaRP-8x7B

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CelestiaRP-8x7B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 9.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 10.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/CelestiaRP-8x7B-i1-GGUF/resolve/main/CelestiaRP-8x7B.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
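As a minimal sketch of getting one of these quants running locally: the repo and file names below are taken from the table above, while the `huggingface_hub` / `llama-cpp-python` usage and the generation settings are illustrative assumptions rather than part of the card.

```python
# Minimal sketch: fetch the i1-Q4_K_S quant and run a short completion with
# llama-cpp-python. Context size and prompt are illustrative choices.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/CelestiaRP-8x7B-i1-GGUF",
    filename="CelestiaRP-8x7B.i1-Q4_K_S.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write a short scene-setting paragraph for a fantasy roleplay.", max_tokens=128)
print(out["choices"][0]["text"])
```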
mradermacher/zephyr-7b-alpha-GGUF
mradermacher
2024-05-06T05:08:56Z
59
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:stingning/ultrachat", "dataset:openbmb/UltraFeedback", "base_model:HuggingFaceH4/zephyr-7b-alpha", "base_model:quantized:HuggingFaceH4/zephyr-7b-alpha", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-08T05:46:11Z
---
base_model: HuggingFaceH4/zephyr-7b-alpha
datasets:
- stingning/ultrachat
- openbmb/UltraFeedback
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
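Since this is a chat-tuned model, a minimal sketch of prompting one of the quants with a Zephyr-style chat prompt is shown below. The quant repo and file name come from the table above; the `<|system|>`/`<|user|>`/`<|assistant|>` template is the commonly documented Zephyr format and should be verified against the base model card, and all other settings are illustrative assumptions.

```python
# Minimal sketch: prompt the Q4_K_M quant of zephyr-7b-alpha via llama-cpp-python.
# The chat template below is assumed, not taken from this card; confirm it
# against HuggingFaceH4/zephyr-7b-alpha before relying on it.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/zephyr-7b-alpha-GGUF",
    filename="zephyr-7b-alpha.Q4_K_M.gguf",
)

prompt = (
    "<|system|>\nYou are a concise, helpful assistant.</s>\n"
    "<|user|>\nExplain in one sentence what a GGUF quant is.</s>\n"
    "<|assistant|>\n"
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm(prompt, max_tokens=96, stop=["</s>"])
print(out["choices"][0]["text"].strip())
```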
mradermacher/zephyr-7b-beta-GGUF
mradermacher
2024-05-06T05:08:45Z
89
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:quantized:HuggingFaceH4/zephyr-7b-beta", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-08T07:40:20Z
---
base_model: HuggingFaceH4/zephyr-7b-beta
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/HuggingFaceH4/zephyr-7b-beta

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-GGUF/resolve/main/zephyr-7b-beta.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/MadMix-v0.2-GGUF
mradermacher
2024-05-06T05:08:30Z
74
0
transformers
[ "transformers", "gguf", "mistral", "merge", "openchat", "7b", "zephyr", "en", "base_model:Fredithefish/MadMix-v0.2", "base_model:quantized:Fredithefish/MadMix-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-08T09:36:49Z
---
base_model: Fredithefish/MadMix-v0.2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mistral
- merge
- openchat
- 7b
- zephyr
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/Fredithefish/MadMix-v0.2

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MadMix-v0.2-GGUF/resolve/main/MadMix-v0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->