| Column | Type | Values |
|:-------|:-----|:-------|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-26 12:28:48 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 498 distinct values |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-26 12:28:16 |
| card | string | lengths 11 to 1.01M |
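The ten columns above describe one record per model. A minimal sketch of loading such a dump with the `datasets` library; the repo ID below is a placeholder, since the source dataset is not named here:

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute the dataset this dump was exported from.
ds = load_dataset("your-namespace/model-cards-dump", split="train")

print(ds.column_names)                       # the ten columns listed above
print(ds[0]["modelId"], ds[0]["downloads"])  # inspect the first record
```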
mradermacher/Sina-Loki-7b-Merge-GGUF
mradermacher
2024-05-06T06:01:41Z
26
0
transformers
[ "transformers", "gguf", "mistral", "merge", "en", "base_model:Azazelle/Sina-Loki-7b-Merge", "base_model:quantized:Azazelle/Sina-Loki-7b-Merge", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-03-24T02:21:57Z
--- base_model: Azazelle/Sina-Loki-7b-Merge language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher tags: - mistral - merge --- ## About static quants of https://huggingface.co/Azazelle/Sina-Loki-7b-Merge <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Sina-Loki-7b-Merge-GGUF/resolve/main/Sina-Loki-7b-Merge.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or 
if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
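The Usage section of the card above defers to TheBloke's READMEs for working with GGUF files. As a minimal sketch, assuming the `huggingface_hub` and `llama-cpp-python` packages (which the card itself does not prescribe), a single-file quant from the table can be fetched and loaded like this:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumption: llama-cpp-python as the GGUF runtime

# Fetch the Q4_K_M quant marked "fast, recommended" in the table above.
path = hf_hub_download(
    repo_id="mradermacher/Sina-Loki-7b-Merge-GGUF",
    filename="Sina-Loki-7b-Merge.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```

Any other single-file filename from the quant table works the same way.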
AigizK/w2v-bert-2.0-mhr-CV17.0
AigizK
2024-05-06T06:01:39Z
77
0
transformers
[ "transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-04T09:57:01Z
--- license: mit tags: - generated_from_trainer base_model: facebook/w2v-bert-2.0 datasets: - common_voice_17_0 model-index: - name: w2v-bert-2.0-mhr-CV17.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-mhr-CV17.0 This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - eval_loss: inf - eval_wer: 0.1681 - eval_wer: 0.0317 - eval_runtime: 543.2858 - eval_samples_per_second: 27.84 - eval_steps_per_second: 3.481 - step: 2400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
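The hyperparameters listed in the card above map directly onto `transformers.TrainingArguments`. The following is a reconstruction of that configuration for reference, not the author's original training script:

```python
from transformers import TrainingArguments

# Reconstruction of the reported settings (not the original script).
args = TrainingArguments(
    output_dir="w2v-bert-2.0-mhr-CV17.0",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total train batch size 32
    warmup_steps=500,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```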
mradermacher/Rudra-7b-GGUF
mradermacher
2024-05-06T06:01:35Z
80
1
transformers
[ "transformers", "gguf", "sa", "dataset:saucam/sans_data", "base_model:saucam/Rudra-7b", "base_model:quantized:saucam/Rudra-7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-24T02:30:36Z
--- base_model: saucam/Rudra-7b datasets: - saucam/sans_data language: - sa library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/saucam/Rudra-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q2_K.gguf) | Q2_K | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.IQ3_XS.gguf) | IQ3_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.IQ3_S.gguf) | IQ3_S | 4.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q3_K_S.gguf) | Q3_K_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.IQ3_M.gguf) | IQ3_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q3_K_L.gguf) | Q3_K_L | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q4_0.gguf) | Q4_0 | 5.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.IQ4_NL.gguf) | IQ4_NL | 5.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q5_K_S.gguf) | Q5_K_S | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q5_K_M.gguf) | Q5_K_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q6_K.gguf) | Q6_K | 7.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Rudra-7b-GGUF/resolve/main/Rudra-7b.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/megameditron-120b-i1-GGUF
mradermacher
2024-05-06T06:01:30Z
10
0
transformers
[ "transformers", "gguf", "en", "base_model:ibivibiv/megameditron-120b", "base_model:quantized:ibivibiv/megameditron-120b", "endpoints_compatible", "region:us" ]
null
2024-03-24T02:54:22Z
--- base_model: ibivibiv/megameditron-120b language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About weighted/imatrix quants of https://huggingface.co/ibivibiv/megameditron-120b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/megameditron-120b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ1_S.gguf) | i1-IQ1_S | 26.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ1_M.gguf) | i1-IQ1_M | 28.7 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.8 | | | [GGUF](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.3 | | | [GGUF](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ2_S.gguf) | i1-IQ2_S | 38.1 | | | [GGUF](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ2_M.gguf) | i1-IQ2_M | 41.4 | | | [GGUF](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q2_K.gguf) | i1-Q2_K | 45.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.7 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 52.9 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 54.7 | | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 58.8 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q3_K_L.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 63.9 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.1 | | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ4_NL.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-IQ4_NL.gguf.part2of2) | i1-IQ4_NL | 68.8 | prefer IQ4_XS | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 68.9 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.2 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.1 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 83.7 | | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.0 | | | [PART 1](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/megameditron-120b-i1-GGUF/resolve/main/megameditron-120b.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 99.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
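Several of the larger quants in the card above are split into `.partNofM` files, and the card points to TheBloke's READMEs for how to concatenate them. A minimal sketch, assuming the `huggingface_hub` package; plain byte-wise concatenation is all that is needed:

```python
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/megameditron-120b-i1-GGUF"
# The i1-Q4_K_S quant ("optimal size/speed/quality") ships as two parts.
parts = [
    hf_hub_download(repo_id=repo, filename=f"megameditron-120b.i1-Q4_K_S.gguf.part{i}of2")
    for i in (1, 2)
]

# Join the parts in order into a single usable GGUF file.
with open("megameditron-120b.i1-Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```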
tsavage68/chat_700_STEPS_03beta_1e6rate_CDPOSFT
tsavage68
2024-05-06T06:01:30Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/chat_600STEPS_1e8rate_SFT", "base_model:finetune:tsavage68/chat_600STEPS_1e8rate_SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T05:55:01Z
--- base_model: tsavage68/chat_600STEPS_1e8rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: chat_700_STEPS_03beta_1e6rate_CDPOSFT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chat_700_STEPS_03beta_1e6rate_CDPOSFT This model is a fine-tuned version of [tsavage68/chat_600STEPS_1e8rate_SFT](https://huggingface.co/tsavage68/chat_600STEPS_1e8rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6706 - Rewards/chosen: -0.2188 - Rewards/rejected: -0.3671 - Rewards/accuracies: 0.5143 - Rewards/margins: 0.1484 - Logps/rejected: -20.0258 - Logps/chosen: -17.4839 - Logits/rejected: -0.6007 - Logits/chosen: -0.6005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 700 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6903 | 0.0977 | 50 | 0.6898 | 0.0339 | 0.0260 | 0.4264 | 0.0078 | -18.7152 | -16.6418 | -0.6000 | -0.5999 | | 0.6568 | 0.1953 | 100 | 0.6714 | -0.1082 | -0.1762 | 0.5099 | 0.0680 | -19.3893 | -17.1151 | -0.6152 | -0.6151 | | 0.7125 | 0.2930 | 150 | 0.6838 | -0.1101 | -0.1755 | 0.4791 | 0.0653 | -19.3869 | -17.1217 | -0.5952 | -0.5950 | | 0.7095 | 0.3906 | 200 | 0.6820 | -0.1564 | -0.2410 | 0.5055 | 0.0846 | -19.6053 | -17.2759 | -0.5844 | -0.5842 | | 0.7264 | 0.4883 | 250 | 0.6859 | -0.0974 | -0.1989 | 0.4967 | 0.1016 | -19.4651 | -17.0792 | -0.5778 | -0.5776 | | 0.6767 | 0.5859 | 300 | 0.6737 | -0.2009 | -0.3435 | 0.5121 | 0.1426 | -19.9470 | -17.4243 | -0.6046 | -0.6044 | | 0.6546 | 0.6836 | 350 | 0.6776 | -0.2753 | -0.4068 | 0.5033 | 0.1316 | -20.1581 | -17.6722 | -0.5869 | -0.5867 | | 0.6473 | 0.7812 | 400 | 0.6697 | -0.2700 | -0.4199 | 0.5209 | 0.1499 | -20.2016 | -17.6546 | -0.6084 | -0.6082 | | 0.68 | 0.8789 | 450 | 0.6720 | -0.2073 | -0.3505 | 0.5121 | 0.1432 | -19.9703 | -17.4455 | -0.5885 | -0.5883 | | 0.6626 | 0.9766 | 500 | 0.6726 | -0.2140 | -0.3584 | 0.5099 | 0.1444 | -19.9967 | -17.4681 | -0.5948 | -0.5946 | | 0.3861 | 1.0742 | 550 | 0.6702 | -0.2078 | -0.3569 | 0.5209 | 0.1492 | -19.9917 | -17.4471 | -0.5992 | -0.5990 | | 0.4031 | 1.1719 | 600 | 0.6720 | -0.2186 | -0.3641 | 0.5121 | 0.1455 | -20.0158 | -17.4834 | -0.6004 | -0.6002 | | 0.4139 | 1.2695 | 650 | 0.6703 | -0.2170 | -0.3648 | 0.5121 | 0.1478 | -20.0179 | -17.4778 | -0.6006 | -0.6004 | | 0.3251 | 1.3672 | 700 | 0.6706 | -0.2188 | -0.3671 | 0.5143 | 0.1484 | -20.0258 | -17.4839 | -0.6007 | -0.6005 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.0 - Tokenizers 0.19.1
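For reference, the training hyperparameters reported in the card above correspond roughly to the following `transformers.TrainingArguments`; the DPO beta of 0.3 implied by the model name would be configured in the `trl` DPO trainer itself and is not shown here:

```python
from transformers import TrainingArguments

# Sketch of the reported settings (not the author's original script).
args = TrainingArguments(
    output_dir="chat_700_STEPS_03beta_1e6rate_CDPOSFT",
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # total train batch size 8
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=700,
    seed=42,
)
```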
mradermacher/Flammen-Kunoichi-7B-GGUF
mradermacher
2024-05-06T06:01:14Z
42
2
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:nbeerbower/Flammen-Kunoichi-7B", "base_model:quantized:nbeerbower/Flammen-Kunoichi-7B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-03-24T06:30:42Z
--- base_model: nbeerbower/Flammen-Kunoichi-7B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About static quants of https://huggingface.co/nbeerbower/Flammen-Kunoichi-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Flammen-Kunoichi-7B-GGUF/resolve/main/Flammen-Kunoichi-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some 
answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
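Rather than copying filenames out of a quant table by hand, the GGUF files actually present in a repo such as the one above can be listed programmatically; a small sketch assuming the `huggingface_hub` package:

```python
from huggingface_hub import list_repo_files

# List every GGUF quant present in the repository.
repo_id = "mradermacher/Flammen-Kunoichi-7B-GGUF"
gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
for name in gguf_files:
    print(name)
```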
mradermacher/StrangeBru-7B-GGUF
mradermacher
2024-05-06T06:01:11Z
61
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:nbeerbower/StrangeBru-7B", "base_model:quantized:nbeerbower/StrangeBru-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-24T06:58:36Z
--- base_model: nbeerbower/StrangeBru-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About static quants of https://huggingface.co/nbeerbower/StrangeBru-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/dutiful-wildflower-GGUF
mradermacher
2024-05-06T06:01:08Z
11
1
transformers
[ "transformers", "gguf", "en", "endpoints_compatible", "region:us" ]
null
2024-03-24T07:39:55Z
--- base_model: harir/dutiful-wildflower language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/harir/dutiful-wildflower <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/dutiful-wildflower-GGUF/resolve/main/dutiful-wildflower.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/dashing-firefly-GGUF
mradermacher
2024-05-06T06:01:06Z
5
0
transformers
[ "transformers", "gguf", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-24T08:21:04Z
--- base_model: harir/dashing-firefly language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/harir/dashing-firefly <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/dashing-firefly-GGUF/resolve/main/dashing-firefly.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MeliodasT3q-7B-GGUF
mradermacher
2024-05-06T06:01:00Z
79
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "automerger", "en", "base_model:automerger/MeliodasT3q-7B", "base_model:quantized:automerger/MeliodasT3q-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-24T09:04:02Z
--- base_model: automerger/MeliodasT3q-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - automerger --- ## About static quants of https://huggingface.co/automerger/MeliodasT3q-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3q-7B-GGUF/resolve/main/MeliodasT3q-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Transcendental-Maid-7B-GGUF
mradermacher
2024-05-06T06:00:55Z
69
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:nbeerbower/Transcendental-Maid-7B", "base_model:quantized:nbeerbower/Transcendental-Maid-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-24T10:01:56Z
--- base_model: nbeerbower/Transcendental-Maid-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About static quants of https://huggingface.co/nbeerbower/Transcendental-Maid-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Transcendental-Maid-7B-GGUF/resolve/main/Transcendental-Maid-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/chronos-hermes-13b-v2-GGUF
mradermacher
2024-05-06T06:00:42Z
120
0
transformers
[ "transformers", "gguf", "llama", "llama-2", "pytorch", "chatbot", "storywriting", "generalist-model", "en", "base_model:Austism/chronos-hermes-13b-v2", "base_model:quantized:Austism/chronos-hermes-13b-v2", "license:other", "endpoints_compatible", "region:us" ]
null
2024-03-24T12:46:47Z
--- base_model: Austism/chronos-hermes-13b-v2 language: - en library_name: transformers license: other quantized_by: mradermacher tags: - llama - llama-2 - pytorch - chatbot - storywriting - generalist-model --- ## About static quants of https://huggingface.co/Austism/chronos-hermes-13b-v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q2_K.gguf) | Q2_K | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.IQ3_XS.gguf) | IQ3_XS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q3_K_S.gguf) | Q3_K_S | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.IQ3_M.gguf) | IQ3_M | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q3_K_L.gguf) | Q3_K_L | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.IQ4_XS.gguf) | IQ4_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q4_0.gguf) | Q4_0 | 7.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.IQ4_NL.gguf) | IQ4_NL | 7.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q5_K_S.gguf) | Q5_K_S | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q5_K_M.gguf) | Q5_K_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q6_K.gguf) | Q6_K | 11.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/chronos-hermes-13b-v2-GGUF/resolve/main/chronos-hermes-13b-v2.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/erlesen-leo-7b-30K-GGUF
mradermacher
2024-05-06T06:00:37Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:MSLars/erlesen-leo-7b-30K", "base_model:quantized:MSLars/erlesen-leo-7b-30K", "endpoints_compatible", "region:us" ]
null
2024-03-24T13:36:07Z
--- base_model: MSLars/erlesen-leo-7b-30K language: - en library_name: transformers quantized_by: mradermacher --- ## About static quants of https://huggingface.co/MSLars/erlesen-leo-7b-30K <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q3_K_S.gguf) | Q3_K_S | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.IQ4_XS.gguf) | IQ4_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q4_0.gguf) | Q4_0 | 4.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.IQ4_NL.gguf) | IQ4_NL | 4.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q5_K_S.gguf) | Q5_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q5_K_M.gguf) | Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q6_K.gguf) | Q6_K | 5.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/erlesen-leo-7b-30K-GGUF/resolve/main/erlesen-leo-7b-30K.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Bernstein-120b-i1-GGUF
mradermacher
2024-05-06T06:00:30Z
1
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:bcse/Bernstein-120b", "base_model:quantized:bcse/Bernstein-120b", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-03-24T14:12:55Z
--- base_model: bcse/Bernstein-120b language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About weighted/imatrix quants of https://huggingface.co/bcse/Bernstein-120b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Bernstein-120b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ1_S.gguf) | i1-IQ1_S | 25.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ1_M.gguf) | i1-IQ1_M | 27.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.2 | | | [GGUF](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 35.8 | | | [GGUF](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ2_S.gguf) | i1-IQ2_S | 37.6 | | | [GGUF](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ2_M.gguf) | i1-IQ2_M | 40.9 | | | [GGUF](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q2_K.gguf) | i1-Q2_K | 44.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 46.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 49.6 | | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.2 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 52.4 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 54.2 | | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 58.2 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 63.4 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ4_XS.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 64.6 | | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ4_NL.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-IQ4_NL.gguf.part2of2) | i1-IQ4_NL | 68.3 | prefer IQ4_XS | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 68.4 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 68.7 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 72.6 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 83.2 | | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 85.4 | | | [PART 1](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Bernstein-120b-i1-GGUF/resolve/main/Bernstein-120b.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 99.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
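The larger quants in the table above are shipped as plain byte-level splits (`.gguf.partXofY`), and the linked README describes concatenating them back into a single `.gguf` before use. A minimal Python sketch of that step, assuming the part files have already been downloaded into the current directory (the filename below is one example taken from the table above):

```python
import shutil
from pathlib import Path

def concatenate_parts(first_part: str) -> Path:
    """Join .gguf.partXofY split files (plain byte splits) back into one .gguf."""
    first = Path(first_part)
    stem, part_tag = first.name.rsplit(".part", 1)    # e.g. "file.gguf", "1of3"
    total = int(part_tag.split("of")[1])
    output = first.with_name(stem)

    with output.open("wb") as out:
        for i in range(1, total + 1):
            part = first.with_name(f"{stem}.part{i}of{total}")
            with part.open("rb") as src:
                shutil.copyfileobj(src, out)           # stream copy; avoids loading 30+ GB into RAM
    return output

# Hypothetical usage with one of the filenames from the table above:
# concatenate_parts("Bernstein-120b.i1-Q6_K.gguf.part1of3")
```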
mradermacher/EinBase-70B-v0.1-full-i1-GGUF
mradermacher
2024-05-06T06:00:24Z
9
0
transformers
[ "transformers", "gguf", "en", "base_model:SF-Foundation/EinBase-70B-v0.1-full", "base_model:quantized:SF-Foundation/EinBase-70B-v0.1-full", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-03-24T17:40:00Z
--- base_model: SF-Foundation/EinBase-70B-v0.1-full language: - en library_name: transformers quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/SF-Foundation/EinBase-70B-v0.1-full <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.7 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-IQ4_NL.gguf) | i1-IQ4_NL | 39.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q4_0.gguf) | i1-Q4_0 | 39.4 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF/resolve/main/EinBase-70B-v0.1-full.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
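For the single-file quants above, one straightforward way to fetch a specific `.gguf` is the `huggingface_hub` client; this is an illustration rather than part of the card, and the repo id and filename below are simply copied from the table above:

```python
from huggingface_hub import hf_hub_download

# Download one quant file into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="mradermacher/EinBase-70B-v0.1-full-i1-GGUF",
    filename="EinBase-70B-v0.1-full.i1-Q4_K_M.gguf",
)
print(path)  # local path of the downloaded .gguf
```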
mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF
mradermacher
2024-05-06T06:00:18Z
33
0
transformers
[ "transformers", "gguf", "en", "dataset:Locutusque/hyperion-dpo-v1.0", "base_model:Locutusque/Hyperion-3.0-Mistral-7B-DPO", "base_model:quantized:Locutusque/Hyperion-3.0-Mistral-7B-DPO", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-24T20:36:49Z
--- base_model: Locutusque/Hyperion-3.0-Mistral-7B-DPO datasets: - Locutusque/hyperion-dpo-v1.0 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/Locutusque/Hyperion-3.0-Mistral-7B-DPO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hyperion-3.0-Mistral-7B-DPO-GGUF/resolve/main/Hyperion-3.0-Mistral-7B-DPO.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is 
better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
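As a usage illustration (not part of this card), a downloaded quant such as the Q4_K_M file above can be loaded with the llama-cpp-python bindings; a minimal sketch, assuming that package is installed and the file is local:

```python
# Assumes llama-cpp-python is installed and the .gguf file sits in the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="Hyperion-3.0-Mistral-7B-DPO.Q4_K_M.gguf",  # filename from the table above
    n_ctx=4096,  # context window; adjust to available memory
)

out = llm("Explain what a K-quant is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```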
mradermacher/Rivoli_7B_SLERP-GGUF
mradermacher
2024-05-06T05:59:23Z
143
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "CultriX/NeuralTrix-bf16", "AurelPx/Percival_01-7b-slerp", "en", "base_model:louisgrc/Rivoli_7B_SLERP", "base_model:quantized:louisgrc/Rivoli_7B_SLERP", "endpoints_compatible", "region:us" ]
null
2024-03-25T08:45:42Z
--- base_model: louisgrc/Rivoli_7B_SLERP language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - CultriX/NeuralTrix-bf16 - AurelPx/Percival_01-7b-slerp --- ## About static quants of https://huggingface.co/louisgrc/Rivoli_7B_SLERP <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Rivoli_7B_SLERP-GGUF/resolve/main/Rivoli_7B_SLERP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/XOrcaSlimWin-13B-GGUF
mradermacher
2024-05-06T05:59:15Z
54
0
transformers
[ "transformers", "gguf", "en", "base_model:Masterjp123/XOrcaSlimWin-13B", "base_model:quantized:Masterjp123/XOrcaSlimWin-13B", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-03-25T11:29:10Z
--- base_model: Masterjp123/XOrcaSlimWin-13B language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/Masterjp123/XOrcaSlimWin-13B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q2_K.gguf) | Q2_K | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ3_XS.gguf) | IQ3_XS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q3_K_S.gguf) | Q3_K_S | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ3_M.gguf) | IQ3_M | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q3_K_L.gguf) | Q3_K_L | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ4_XS.gguf) | IQ4_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q4_0.gguf) | Q4_0 | 7.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ4_NL.gguf) | IQ4_NL | 7.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q5_K_S.gguf) | Q5_K_S | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q5_K_M.gguf) | Q5_K_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q6_K.gguf) | Q6_K | 11.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MoNeuTrix-MoE-4x7B-GGUF
mradermacher
2024-05-06T05:59:09Z
36
0
transformers
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "CultriX/MonaTrix-v4", "mlabonne/OmniTruthyBeagle-7B-v0", "CultriX/MoNeuTrix-7B-v1", "paulml/OmniBeagleSquaredMBX-v3-7B", "en", "base_model:CultriX/MoNeuTrix-MoE-4x7B", "base_model:quantized:CultriX/MoNeuTrix-MoE-4x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-25T14:36:55Z
--- base_model: CultriX/MoNeuTrix-MoE-4x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - frankenmoe - merge - mergekit - lazymergekit - CultriX/MonaTrix-v4 - mlabonne/OmniTruthyBeagle-7B-v0 - CultriX/MoNeuTrix-7B-v1 - paulml/OmniBeagleSquaredMBX-v3-7B --- ## About static quants of https://huggingface.co/CultriX/MoNeuTrix-MoE-4x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q2_K.gguf) | Q2_K | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.IQ3_XS.gguf) | IQ3_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q3_K_S.gguf) | Q3_K_S | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.IQ3_S.gguf) | IQ3_S | 10.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.IQ3_M.gguf) | IQ3_M | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q3_K_M.gguf) | Q3_K_M | 11.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q3_K_L.gguf) | Q3_K_L | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.IQ4_XS.gguf) | IQ4_XS | 13.3 | | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q4_0.gguf) | Q4_0 | 13.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q4_K_S.gguf) | Q4_K_S | 14.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.IQ4_NL.gguf) | IQ4_NL | 14.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q4_K_M.gguf) | Q4_K_M | 14.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q5_K_S.gguf) | Q5_K_S | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q5_K_M.gguf) | Q5_K_M | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q6_K.gguf) | Q6_K | 20.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MoNeuTrix-MoE-4x7B-GGUF/resolve/main/MoNeuTrix-MoE-4x7B.Q8_0.gguf) | Q8_0 | 25.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some 
answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Phind-Codefuse-34B-GGUF
mradermacher
2024-05-06T05:59:07Z
54
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Phind/Phind-CodeLlama-34B-v2", "codefuse-ai/CodeFuse-CodeLlama-34B", "en", "base_model:saucam/Phind-Codefuse-34B", "base_model:quantized:saucam/Phind-Codefuse-34B", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-03-25T14:39:56Z
--- base_model: saucam/Phind-Codefuse-34B language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Phind/Phind-CodeLlama-34B-v2 - codefuse-ai/CodeFuse-CodeLlama-34B --- ## About static quants of https://huggingface.co/saucam/Phind-Codefuse-34B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q3_K_S.gguf) | Q3_K_S | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.IQ3_M.gguf) | IQ3_M | 15.6 | | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q3_K_M.gguf) | Q3_K_M | 16.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.IQ4_XS.gguf) | IQ4_XS | 18.6 | | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q4_0.gguf) | Q4_0 | 19.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q4_K_S.gguf) | Q4_K_S | 19.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.IQ4_NL.gguf) | IQ4_NL | 19.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q4_K_M.gguf) | Q4_K_M | 20.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q5_K_S.gguf) | Q5_K_S | 23.6 | | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q5_K_M.gguf) | Q5_K_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q6_K.gguf) | Q6_K | 28.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Phind-Codefuse-34B-GGUF/resolve/main/Phind-Codefuse-34B.Q8_0.gguf) | Q8_0 | 36.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/NeuralMona_MoE-4x7B-GGUF
mradermacher
2024-05-06T05:58:30Z
294
0
transformers
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "CultriX/MonaTrix-v4", "mlabonne/OmniTruthyBeagle-7B-v0", "CultriX/MoNeuTrix-7B-v1", "paulml/OmniBeagleSquaredMBX-v3-7B", "en", "base_model:CultriX/NeuralMona_MoE-4x7B", "base_model:quantized:CultriX/NeuralMona_MoE-4x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-25T20:19:41Z
--- base_model: CultriX/NeuralMona_MoE-4x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - frankenmoe - merge - mergekit - lazymergekit - CultriX/MonaTrix-v4 - mlabonne/OmniTruthyBeagle-7B-v0 - CultriX/MoNeuTrix-7B-v1 - paulml/OmniBeagleSquaredMBX-v3-7B --- ## About static quants of https://huggingface.co/CultriX/NeuralMona_MoE-4x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q2_K.gguf) | Q2_K | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.IQ3_XS.gguf) | IQ3_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q3_K_S.gguf) | Q3_K_S | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.IQ3_S.gguf) | IQ3_S | 10.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.IQ3_M.gguf) | IQ3_M | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q3_K_M.gguf) | Q3_K_M | 11.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q3_K_L.gguf) | Q3_K_L | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.IQ4_XS.gguf) | IQ4_XS | 13.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q4_0.gguf) | Q4_0 | 13.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q4_K_S.gguf) | Q4_K_S | 14.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.IQ4_NL.gguf) | IQ4_NL | 14.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q4_K_M.gguf) | Q4_K_M | 14.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q5_K_S.gguf) | Q5_K_S | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q5_K_M.gguf) | Q5_K_M | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q6_K.gguf) | Q6_K | 20.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuralMona_MoE-4x7B-GGUF/resolve/main/NeuralMona_MoE-4x7B.Q8_0.gguf) | Q8_0 | 25.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/PALO-7B-GGUF
mradermacher
2024-05-06T05:57:51Z
41
0
transformers
[ "transformers", "gguf", "en", "base_model:MBZUAI/PALO-7B", "base_model:quantized:MBZUAI/PALO-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-25T23:33:08Z
--- base_model: MBZUAI/PALO-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/MBZUAI/PALO-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q3_K_S.gguf) | Q3_K_S | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.IQ4_XS.gguf) | IQ4_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q4_0.gguf) | Q4_0 | 4.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.IQ4_NL.gguf) | IQ4_NL | 4.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q5_K_S.gguf) | Q5_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q5_K_M.gguf) | Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q6_K.gguf) | Q6_K | 5.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/PALO-7B-GGUF/resolve/main/PALO-7B.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF
mradermacher
2024-05-06T05:57:47Z
75
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "macadeliccc/MonarchLake-7B", "Kukedlc/NeoCortex-7B-slerp", "en", "base_model:Kukedlc/Fasciculus-Arcuatus-7B-slerp", "base_model:quantized:Kukedlc/Fasciculus-Arcuatus-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-26T00:07:55Z
--- base_model: Kukedlc/Fasciculus-Arcuatus-7B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - macadeliccc/MonarchLake-7B - Kukedlc/NeoCortex-7B-slerp --- ## About static quants of https://huggingface.co/Kukedlc/Fasciculus-Arcuatus-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | 
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/KoSOLAR-10.7B-v1.0-GGUF
mradermacher
2024-05-06T05:57:41Z
17
0
transformers
[ "transformers", "gguf", "merge", "en", "base_model:rrw-x2/KoSOLAR-10.7B-v1.0", "base_model:quantized:rrw-x2/KoSOLAR-10.7B-v1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-26T00:18:09Z
--- base_model: rrw-x2/KoSOLAR-10.7B-v1.0 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge --- ## About static quants of https://huggingface.co/rrw-x2/KoSOLAR-10.7B-v1.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q2_K.gguf) | Q2_K | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.IQ3_XS.gguf) | IQ3_XS | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q3_K_S.gguf) | Q3_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.IQ3_S.gguf) | IQ3_S | 5.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.IQ3_M.gguf) | IQ3_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q3_K_M.gguf) | Q3_K_M | 5.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q3_K_L.gguf) | Q3_K_L | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.IQ4_XS.gguf) | IQ4_XS | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q4_0.gguf) | Q4_0 | 6.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q4_K_S.gguf) | Q4_K_S | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.IQ4_NL.gguf) | IQ4_NL | 6.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q4_K_M.gguf) | Q4_K_M | 7.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q5_K_S.gguf) | Q5_K_S | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q5_K_M.gguf) | Q5_K_M | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q6_K.gguf) | Q6_K | 9.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/KoSOLAR-10.7B-v1.0-GGUF/resolve/main/KoSOLAR-10.7B-v1.0.Q8_0.gguf) | Q8_0 | 11.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want 
some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF
mradermacher
2024-05-06T05:57:38Z
21
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Kukedlc/Neural-Krishna-Multiverse-7b-v2", "yam-peleg/Experiment26-7B", "en", "base_model:Kukedlc/Neural-Krishna-Multiverse-7b-v3", "base_model:quantized:Kukedlc/Neural-Krishna-Multiverse-7b-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-26T00:45:49Z
--- base_model: Kukedlc/Neural-Krishna-Multiverse-7b-v3 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Kukedlc/Neural-Krishna-Multiverse-7b-v2 - yam-peleg/Experiment26-7B --- ## About static quants of https://huggingface.co/Kukedlc/Neural-Krishna-Multiverse-7b-v3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Neural-Krishna-Multiverse-7b-v3-GGUF/resolve/main/Neural-Krishna-Multiverse-7b-v3.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/IreneRP-Neural-7B-slerp-GGUF
mradermacher
2024-05-06T05:56:54Z
14
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Virt-io/Irene-RP-v3-7B", "NurtureAI/neural-chat-7b-v3-16k", "en", "base_model:Smuggling1710/IreneRP-Neural-7B-slerp", "base_model:quantized:Smuggling1710/IreneRP-Neural-7B-slerp", "endpoints_compatible", "region:us" ]
null
2024-03-26T02:53:27Z
--- base_model: Smuggling1710/IreneRP-Neural-7B-slerp language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Virt-io/Irene-RP-v3-7B - NurtureAI/neural-chat-7b-v3-16k --- ## About static quants of https://huggingface.co/Smuggling1710/IreneRP-Neural-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/IreneRP-Neural-7B-slerp-GGUF/resolve/main/IreneRP-Neural-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here 
are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Babaroga-7B-slerp-GGUF
mradermacher
2024-05-06T05:56:52Z
22
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Stopwolf/Gunj-7B-v2-full", "mlabonne/AlphaMonarch-7B", "en", "base_model:IntellyaDS/Babaroga-7B-slerp", "base_model:quantized:IntellyaDS/Babaroga-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-26T04:20:31Z
--- base_model: Stopwolf/Babaroga-7B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Stopwolf/Gunj-7B-v2-full - mlabonne/AlphaMonarch-7B --- ## About static quants of https://huggingface.co/Stopwolf/Babaroga-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Babaroga-7B-slerp-GGUF/resolve/main/Babaroga-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some 
answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Rogue-Rose-103b-v0.2-GGUF
mradermacher
2024-05-06T05:56:49Z
37
0
transformers
[ "transformers", "gguf", "en", "base_model:sophosympatheia/Rogue-Rose-103b-v0.2", "base_model:quantized:sophosympatheia/Rogue-Rose-103b-v0.2", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-03-26T04:41:11Z
--- base_model: sophosympatheia/Rogue-Rose-103b-v0.2 language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q2_K.gguf) | Q2_K | 38.3 | | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.IQ3_XS.gguf) | IQ3_XS | 42.6 | | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q3_K_S.gguf) | Q3_K_S | 44.9 | | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.IQ3_S.gguf) | IQ3_S | 45.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.IQ3_M.gguf) | IQ3_M | 46.5 | | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q3_K_M.gguf.part2of2) | Q3_K_M | 50.0 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q3_K_L.gguf.part2of2) | Q3_K_L | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.IQ4_XS.gguf.part2of2) | IQ4_XS | 56.0 | | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q4_0.gguf.part2of2) | Q4_0 | 58.5 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q4_K_S.gguf.part2of2) | Q4_K_S | 59.0 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.IQ4_NL.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.IQ4_NL.gguf.part2of2) | IQ4_NL | 59.1 | prefer IQ4_XS | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q4_K_M.gguf.part2of2) | Q4_K_M | 62.3 | fast, recommended | | [PART 
1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q5_K_S.gguf.part2of2) | Q5_K_S | 71.4 | | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q5_K_M.gguf.part2of2) | Q5_K_M | 73.3 | | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q6_K.gguf.part2of2) | Q6_K | 85.1 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF/resolve/main/Rogue-Rose-103b-v0.2.Q8_0.gguf.part3of3) | Q8_0 | 110.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
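The larger Rogue-Rose quants in the table above ship as `*.partXofY` split files that must be joined back into a single `.gguf` before loading, as the card's Usage section notes. Below is a minimal Python sketch of that step; it assumes `huggingface_hub` is installed, that enough free disk space is available (roughly 120 GB for Q4_K_S plus the joined copy), and uses the Q4_K_S file names exactly as listed in the table.

```python
from huggingface_hub import hf_hub_download
import shutil

repo = "mradermacher/Rogue-Rose-103b-v0.2-GGUF"
parts = [
    "Rogue-Rose-103b-v0.2.Q4_K_S.gguf.part1of2",
    "Rogue-Rose-103b-v0.2.Q4_K_S.gguf.part2of2",
]

# Download each part, then append them in order to rebuild the single GGUF file.
with open("Rogue-Rose-103b-v0.2.Q4_K_S.gguf", "wb") as out:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as src:
            shutil.copyfileobj(src, out)
```

The rejoined file can then be loaded by any GGUF-capable runtime such as llama.cpp.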
mradermacher/Gunj-7B-v2-full-GGUF
mradermacher
2024-05-06T05:56:46Z
2
0
transformers
[ "transformers", "gguf", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-26T04:49:54Z
--- base_model: Stopwolf/Gunj-7B-v2-full language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/Stopwolf/Gunj-7B-v2-full <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Gunj-7B-v2-full-GGUF/resolve/main/Gunj-7B-v2-full.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Threnystril-v2.0-7B-slerp-GGUF
mradermacher
2024-05-06T05:56:42Z
20
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "ozayezerceli/Threnystril-7B-slerp", "ozayezerceli/BetterSaul-7B-slerp", "en", "base_model:newmindai/Threnystril-v2.0-7B-slerp", "base_model:quantized:newmindai/Threnystril-v2.0-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-26T05:09:34Z
--- base_model: ozayezerceli/Threnystril-v2.0-7B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - ozayezerceli/Threnystril-7B-slerp - ozayezerceli/BetterSaul-7B-slerp --- ## About static quants of https://huggingface.co/ozayezerceli/Threnystril-v2.0-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Threnystril-v2.0-7B-slerp-GGUF/resolve/main/Threnystril-v2.0-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types 
(lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
DevsDoCode/LLama-3-8b-Uncensored-Q3_K_S-GGUF
DevsDoCode
2024-05-06T05:56:14Z
7
2
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-05T06:04:53Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo --- <div align="center"> <!-- Replace `#` with your actual links --> <a href="https://youtube.com/@devsdocode"><img alt="YouTube" src="https://img.shields.io/badge/YouTube-FF0000?style=for-the-badge&logo=youtube&logoColor=white"></a> <a href="https://t.me/devsdocode"><img alt="Telegram" src="https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white"></a> <a href="https://www.instagram.com/sree.shades_/"><img alt="Instagram" src="https://img.shields.io/badge/Instagram-E4405F?style=for-the-badge&logo=instagram&logoColor=white"></a> <a href="https://www.linkedin.com/in/developer-sreejan/"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white"></a> <a href="https://buymeacoffee.com/devsdocode"><img alt="Buy Me A Coffee" src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&logo=buymeacoffee&logoColor=black"></a> </div> ## Crafted with ❤️ by Devs Do Code (Sree) ### GGUF Technical Specifications Delve into the intricacies of GGUF, a meticulously crafted format that builds upon the robust foundation of the GGJT model. Tailored for heightened extensibility and user-centric functionality, GGUF introduces a suite of indispensable features: **Single-file Deployment:** Streamline distribution and loading effortlessly. GGUF models have been meticulously architected for seamless deployment, necessitating no external files for supplementary information. **Extensibility:** Safeguard the future of your models. GGUF seamlessly accommodates the integration of new features into GGML-based executors, ensuring compatibility with existing models. **mmap Compatibility:** Prioritize efficiency. GGUF models are purposefully engineered to support mmap, facilitating rapid loading and saving, thus optimizing your workflow. **User-Friendly:** Simplify your coding endeavors. Load and save models effortlessly, irrespective of the programming language used, obviating the dependency on external libraries. **Full Information:** A comprehensive repository in a single file. GGUF models encapsulate all requisite information for loading, eliminating the need for users to furnish additional data. The differentiator between GGJT and GGUF lies in the deliberate adoption of a key-value structure for hyperparameters (now termed metadata). Bid farewell to untyped lists, and embrace a structured approach that seamlessly accommodates new metadata without compromising compatibility with existing models. Augment your model with supplementary information for enhanced inference and model identification. 
**QUANTIZATION_METHODS:** | Method | Quantization | Advantages | Trade-offs | |---|---|---|---| | q2_k | 2-bit k-quant | Smallest file size | Largest loss in accuracy | | q3_k_l | 3-bit k-quant (large) | Best accuracy of the 3-bit variants | Larger files than q3_k_s/m | | q3_k_m | 3-bit k-quant (medium) | Good balance of size and accuracy | Noticeable accuracy loss | | q3_k_s | 3-bit k-quant (small) | Smallest 3-bit variant | Lower accuracy than q3_k_m/l | | q4_0 | 4-bit (legacy) | Small and fast | Lower quality than the q4_k variants | | q4_1 | 4-bit (legacy) | Slightly better accuracy than q4_0 | Larger than q4_0 | | q4_k_m | 4-bit k-quant (medium) | Recommended balance of size and accuracy | Minor accuracy loss | | q4_k_s | 4-bit k-quant (small) | Smaller than q4_k_m | Slightly lower accuracy | | q5_0 | 5-bit (legacy) | Good accuracy | Larger files; superseded by the q5_k variants | | q5_1 | 5-bit (legacy) | Slightly better accuracy than q5_0 | Larger than q5_0 | | q5_k_m | 5-bit k-quant (medium) | Accuracy close to the original model | Larger files | | q5_k_s | 5-bit k-quant (small) | Smaller than q5_k_m | Slightly lower accuracy | | q6_k | 6-bit k-quant | Very close to original accuracy | Large files | | q8_0 | 8-bit | Practically lossless | Largest quantized size | <div align="center"> <!-- Replace `#` with your actual links --> <a href="https://youtube.com/@devsdocode"><img alt="YouTube" src="https://img.shields.io/badge/YouTube-FF0000?style=for-the-badge&logo=youtube&logoColor=white"></a> <a href="https://t.me/devsdocode"><img alt="Telegram" src="https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white"></a> <a href="https://www.instagram.com/sree.shades_/"><img alt="Instagram" src="https://img.shields.io/badge/Instagram-E4405F?style=for-the-badge&logo=instagram&logoColor=white"></a> <a href="https://www.linkedin.com/in/developer-sreejan/"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white"></a> <a href="https://buymeacoffee.com/devsdocode"><img alt="Buy Me A Coffee" src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&logo=buymeacoffee&logoColor=black"></a> </div>
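To make the single-file deployment point above concrete, here is a minimal sketch of loading one of these quantized files with the `llama-cpp-python` bindings. It assumes that package is installed and that a quant from this repo has already been downloaded locally; the filename below is illustrative, not the exact artifact name.

```python
from llama_cpp import Llama

# Load the single-file GGUF model; weights, tokenizer, and metadata
# (the key-value hyperparameters described above) all live in this one file.
llm = Llama(model_path="llama-3-8b-uncensored.Q3_K_S.gguf", n_ctx=4096)

out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```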
mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF
mradermacher
2024-05-06T05:56:09Z
122
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Smuggling1710/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp", "mlabonne/NeuralBeagle14-7B", "en", "base_model:Smuggling1710/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp", "base_model:quantized:Smuggling1710/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp", "endpoints_compatible", "region:us" ]
null
2024-03-26T09:08:53Z
--- base_model: Smuggling1710/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Smuggling1710/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp - mlabonne/NeuralBeagle14-7B --- ## About static quants of https://huggingface.co/Smuggling1710/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BeagleNuBuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/TraumaticaX0-GGUF
mradermacher
2024-05-06T05:55:51Z
4
0
transformers
[ "transformers", "gguf", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-26T12:32:41Z
--- base_model: 0x0grandpa0/TraumaticaX0 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/0x0grandpa0/TraumaticaX0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TraumaticaX0-GGUF/resolve/main/TraumaticaX0.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
joyle/bert-finetuned-ner
joyle
2024-05-06T05:55:46Z
107
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-06T00:51:51Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0621 - Precision: 0.9297 - Recall: 0.9477 - F1: 0.9386 - Accuracy: 0.9864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.072 | 1.0 | 1756 | 0.0647 | 0.8982 | 0.9323 | 0.9149 | 0.9817 | | 0.0352 | 2.0 | 3512 | 0.0666 | 0.9305 | 0.9443 | 0.9374 | 0.9853 | | 0.0211 | 3.0 | 5268 | 0.0621 | 0.9297 | 0.9477 | 0.9386 | 0.9864 | ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
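For reference, a minimal usage sketch with the `transformers` pipeline API; the entity labels returned depend on the (unspecified) training data, so the example sentence below is only illustrative.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and group sub-word tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="joyle/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```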
mradermacher/NeoCortex-7B-slerp-GGUF
mradermacher
2024-05-06T05:55:41Z
88
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Kukedlc/Neural4gsm8k", "macadeliccc/WestLake-7B-v2-laser-truthy-dpo", "en", "base_model:Kukedlc/NeoCortex-7B-slerp", "base_model:quantized:Kukedlc/NeoCortex-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-26T12:37:54Z
--- base_model: Kukedlc/NeoCortex-7B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Kukedlc/Neural4gsm8k - macadeliccc/WestLake-7B-v2-laser-truthy-dpo --- ## About static quants of https://huggingface.co/Kukedlc/NeoCortex-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeoCortex-7B-slerp-GGUF/resolve/main/NeoCortex-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MidnightVelvetBlaze-GGUF
mradermacher
2024-05-06T05:55:38Z
75
0
transformers
[ "transformers", "gguf", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-26T13:11:16Z
--- base_model: 0x0grandpa0/MidnightVelvetBlaze language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/0x0grandpa0/MidnightVelvetBlaze <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MidnightVelvetBlaze-GGUF/resolve/main/MidnightVelvetBlaze.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have 
and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/HoloViolet-7B-GGUF
mradermacher
2024-05-06T05:55:30Z
261
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "GreenNode/GreenNode-mini-7B-multilingual-v1olet", "KoboldAI/Mistral-7B-Holodeck-1", "en", "base_model:son-of-man/HoloViolet-7B", "base_model:quantized:son-of-man/HoloViolet-7B", "endpoints_compatible", "region:us" ]
null
2024-03-26T13:47:50Z
--- base_model: son-of-man/HoloViolet-7B language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - GreenNode/GreenNode-mini-7B-multilingual-v1olet - KoboldAI/Mistral-7B-Holodeck-1 --- ## About static quants of https://huggingface.co/son-of-man/HoloViolet-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF/resolve/main/HoloViolet-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/NeuTrixOmniBe-DPO-GGUF
mradermacher
2024-05-06T05:55:13Z
118
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "#dpo", "MaximeLabonne", "#mergeofmerge", "en", "base_model:Kukedlc/NeuTrixOmniBe-DPO", "base_model:quantized:Kukedlc/NeuTrixOmniBe-DPO", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-26T16:47:44Z
--- base_model: Kukedlc/NeuTrixOmniBe-DPO language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - '#dpo' - MaximeLabonne - '#mergeofmerge' --- ## About static quants of https://huggingface.co/Kukedlc/NeuTrixOmniBe-DPO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuTrixOmniBe-DPO-GGUF/resolve/main/NeuTrixOmniBe-DPO.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might 
have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/mahibot-to_finetune-V4-GGUF
mradermacher
2024-05-06T05:55:09Z
5
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-26T16:50:21Z
--- base_model: mahiatlinux/mahibot-to_finetune-V4 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft --- ## About static quants of https://huggingface.co/mahiatlinux/mahibot-to_finetune-V4 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mahibot-to_finetune-V4-GGUF/resolve/main/mahibot-to_finetune-V4.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the 
matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/WikiHow-Mistral-Instruct-7B-GGUF
mradermacher
2024-05-06T05:54:59Z
106
1
transformers
[ "transformers", "gguf", "wikihow", "tutorial", "educational", "en", "dataset:ajibawa-2023/WikiHow", "base_model:ajibawa-2023/WikiHow-Mistral-Instruct-7B", "base_model:quantized:ajibawa-2023/WikiHow-Mistral-Instruct-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-26T17:25:33Z
--- base_model: ajibawa-2023/WikiHow-Mistral-Instruct-7B datasets: - ajibawa-2023/WikiHow language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - wikihow - tutorial - educational --- ## About static quants of https://huggingface.co/ajibawa-2023/WikiHow-Mistral-Instruct-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/WikiHow-Mistral-Instruct-7B-GGUF/resolve/main/WikiHow-Mistral-Instruct-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some 
lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/ShadowDolph-7B-v1-GGUF
mradermacher
2024-05-06T05:54:35Z
18
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "mahiatlinux/merged1and2-and-dolphin", "automerger/YamShadow-7B", "en", "base_model:mahiatlinux/ShadowDolph-7B-v1", "base_model:quantized:mahiatlinux/ShadowDolph-7B-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-26T17:56:04Z
--- base_model: mahiatlinux/ShadowDolph-7B-v1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - mahiatlinux/merged1and2-and-dolphin - automerger/YamShadow-7B --- ## About static quants of https://huggingface.co/mahiatlinux/ShadowDolph-7B-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ShadowDolph-7B-v1-GGUF/resolve/main/ShadowDolph-7B-v1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests 
for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/orthorus-125b-moe-GGUF
mradermacher
2024-05-06T05:54:32Z
1
0
transformers
[ "transformers", "gguf", "moe", "en", "base_model:ibivibiv/orthorus-125b-moe", "base_model:quantized:ibivibiv/orthorus-125b-moe", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-02-20T21:04:33Z
--- base_model: ibivibiv/orthorus-125b-moe language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - moe --- ## About static quants of https://huggingface.co/ibivibiv/orthorus-125b-moe <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/orthorus-125b-moe-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q2_K.gguf) | Q2_K | 46.8 | | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.IQ3_XS.gguf.part2of2) | IQ3_XS | 51.3 | | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q3_K_XS.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q3_K_XS.gguf.split-ab) | Q3_K_XS | 51.6 | | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.IQ3_S.gguf.part2of2) | IQ3_S | 54.2 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q3_K_S.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q3_K_S.gguf.split-ab) | Q3_K_S | 55.1 | | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.IQ3_M.gguf.part2of2) | IQ3_M | 55.6 | | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q3_K_M.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q3_K_M.gguf.split-ab) | Q3_K_M | 61.1 | lower quality | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q3_K_L.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q3_K_L.gguf.split-ab) | Q3_K_L | 66.1 | | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.IQ4_XS.gguf.part2of2) | IQ4_XS | 67.6 | | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q4_K_S.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q4_K_S.gguf.split-ab) | Q4_K_S | 72.2 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q4_K_M.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q4_K_M.gguf.split-ab) | 
Q4_K_M | 76.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q5_K_S.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q5_K_S.gguf.split-ab) | Q5_K_S | 87.2 | | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q5_K_M.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q5_K_M.gguf.split-ab) | Q5_K_M | 89.7 | | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q6_K.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q6_K.gguf.split-ab) [PART 3](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q6_K.gguf.split-ac) | Q6_K | 103.8 | very good quality | | [PART 1](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q8_0.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q8_0.gguf.split-ab) [PART 3](https://huggingface.co/mradermacher/orthorus-125b-moe-GGUF/resolve/main/orthorus-125b-moe.Q8_0.gguf.split-ac) | Q8_0 | 134.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
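Since almost every quant of this 125B MoE is split into two or three parts, the concatenation step mentioned in the Usage section is unavoidable here. Below is a minimal Python sketch that downloads the parts of one quant and joins them in order; it is byte-for-byte equivalent to `cat` and assumes nothing beyond the part names listed in the table above.

```python
# Minimal sketch: download and reassemble a multi-part GGUF quant.
# The Q4_K_S quant of this repo ships as .split-aa / .split-ab; writing the
# parts back-to-back in order yields the single usable .gguf file.
from huggingface_hub import hf_hub_download

repo = "mradermacher/orthorus-125b-moe-GGUF"
parts = [
    "orthorus-125b-moe.Q4_K_S.gguf.split-aa",
    "orthorus-125b-moe.Q4_K_S.gguf.split-ab",
]

with open("orthorus-125b-moe.Q4_K_S.gguf", "wb") as out:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as part:
            while chunk := part.read(1 << 24):  # copy in 16 MiB chunks
                out.write(chunk)
```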
mradermacher/Genie-GGUF
mradermacher
2024-05-06T05:54:31Z
32
0
transformers
[ "transformers", "gguf", "en", "dataset:Sadiah/Genie", "base_model:Sadiah/Genie", "base_model:quantized:Sadiah/Genie", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-03-26T17:56:13Z
--- base_model: Sadiah/Genie datasets: - Sadiah/Genie language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/Sadiah/Genie <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q2_K.gguf) | Q2_K | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.IQ3_XS.gguf) | IQ3_XS | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q3_K_S.gguf) | Q3_K_S | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.IQ3_S.gguf) | IQ3_S | 3.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.IQ3_M.gguf) | IQ3_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q3_K_L.gguf) | Q3_K_L | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.IQ4_XS.gguf) | IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q4_0.gguf) | Q4_0 | 4.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q4_K_S.gguf) | Q4_K_S | 4.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.IQ4_NL.gguf) | IQ4_NL | 4.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q5_K_S.gguf) | Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q5_K_M.gguf) | Q5_K_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q6_K.gguf) | Q6_K | 6.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Genie-GGUF/resolve/main/Genie.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/DistHermYam-7B-ties-GGUF
mradermacher
2024-05-06T05:54:25Z
18
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "yam-peleg/Experiment21-7B", "eren23/DistilHermes-2.5-Mistral-7B", "en", "base_model:codegood/DistHermYam-7B-ties", "base_model:quantized:codegood/DistHermYam-7B-ties", "endpoints_compatible", "region:us" ]
null
2024-03-26T19:34:25Z
--- base_model: codegood/DistHermYam-7B-ties language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - yam-peleg/Experiment21-7B - eren23/DistilHermes-2.5-Mistral-7B --- ## About static quants of https://huggingface.co/codegood/DistHermYam-7B-ties <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/DistHermYam-7B-ties-GGUF/resolve/main/DistHermYam-7B-ties.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
wuzhongyanqiu/distilbert-base-uncased-finetuned-imdb-accelerate
wuzhongyanqiu
2024-05-06T05:53:59Z
162
0
transformers
[ "transformers", "safetensors", "distilbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-06T05:31:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
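The "How to Get Started" section of this card is still a placeholder. Pending details from the author, here is a minimal sketch using the standard transformers fill-mask pipeline; it assumes only what the record's metadata states (a DistilBERT fill-mask checkpoint), and the example sentence is illustrative.

```python
# Minimal sketch: masked-token prediction with this fine-tuned DistilBERT checkpoint.
# Assumes the stock transformers fill-mask pipeline applies; [MASK] is the
# mask token used by distilbert-base-uncased.
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="wuzhongyanqiu/distilbert-base-uncased-finetuned-imdb-accelerate",
)

for pred in fill("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```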
mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF
mradermacher
2024-05-06T05:53:38Z
28
0
transformers
[ "transformers", "gguf", "en", "base_model:sandmanbuzz/Air-Striker-Mixtral-8x7B-ZLoss", "base_model:quantized:sandmanbuzz/Air-Striker-Mixtral-8x7B-ZLoss", "endpoints_compatible", "region:us" ]
null
2024-03-27T03:28:31Z
--- base_model: sandmanbuzz/Air-Striker-Mixtral-8x7B-ZLoss language: - en library_name: transformers quantized_by: mradermacher --- ## About static quants of https://huggingface.co/sandmanbuzz/Air-Striker-Mixtral-8x7B-ZLoss <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q2_K.gguf) | Q2_K | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.IQ3_S.gguf) | IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q3_K_S.gguf) | Q3_K_S | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.IQ3_M.gguf) | IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q3_K_M.gguf) | Q3_K_M | 22.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q3_K_L.gguf) | Q3_K_L | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.IQ4_XS.gguf) | IQ4_XS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q4_0.gguf) | Q4_0 | 26.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.IQ4_NL.gguf) | IQ4_NL | 27.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q4_K_S.gguf) | Q4_K_S | 27.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q4_K_M.gguf) | Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q5_K_S.gguf) | Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q5_K_M.gguf) | Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q6_K.gguf) | Q6_K | 38.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Air-Striker-Mixtral-8x7B-ZLoss-GGUF/resolve/main/Air-Striker-Mixtral-8x7B-ZLoss.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, 
best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF
mradermacher
2024-05-06T05:53:32Z
47
0
transformers
[ "transformers", "gguf", "en", "base_model:sophosympatheia/Rogue-Rose-103b-v0.2", "base_model:quantized:sophosympatheia/Rogue-Rose-103b-v0.2", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-03-27T06:03:53Z
--- base_model: sophosympatheia/Rogue-Rose-103b-v0.2 language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ1_S.gguf) | i1-IQ1_S | 22.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ1_M.gguf) | i1-IQ1_M | 24.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.7 | | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 30.8 | | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ2_S.gguf) | i1-IQ2_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ2_M.gguf) | i1-IQ2_M | 35.1 | | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q2_K.gguf) | i1-Q2_K | 38.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 40.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 42.6 | | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 44.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ3_S.gguf) | i1-IQ3_S | 45.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ3_M.gguf) | i1-IQ3_M | 46.5 | | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 50.0 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 54.5 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 55.5 | | | [PART 
1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ4_NL.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-IQ4_NL.gguf.part2of2) | i1-IQ4_NL | 58.7 | prefer IQ4_XS | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 58.8 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 59.0 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 62.3 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 71.4 | | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 73.3 | | | [PART 1](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Rogue-Rose-103b-v0.2-i1-GGUF/resolve/main/Rogue-Rose-103b-v0.2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 85.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
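With file sizes ranging from about 22 GB (i1-IQ1_S) to 85 GB (i1-Q6_K), the practical question for this repo is which quant fits your combined RAM/VRAM. The sketch below encodes a simple rule of thumb: pick the largest quant whose on-disk size, plus some headroom for KV cache and buffers, still fits. The 15% headroom factor is a rough assumption, not a measurement.

```python
# Minimal sketch: choose the largest i1 quant of Rogue-Rose-103b that plausibly
# fits in memory. Sizes (GB) are copied from the table above; the headroom
# factor for runtime overhead is a rough assumption.
QUANT_SIZES_GB = {
    "i1-IQ2_M": 35.1,
    "i1-Q2_K": 38.3,
    "i1-IQ3_M": 46.5,
    "i1-Q4_K_S": 59.0,
    "i1-Q4_K_M": 62.3,
    "i1-Q5_K_M": 73.3,
    "i1-Q6_K": 85.1,
}

def pick_quant(available_gb, headroom=1.15):
    """Largest quant whose size * headroom fits into the available memory."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s * headroom <= available_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(64.0))   # 64 GB total -> i1-IQ3_M under these assumptions
print(pick_quant(96.0))   # 96 GB total -> i1-Q5_K_M
```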
mradermacher/MegaQwen-120B-GGUF
mradermacher
2024-05-06T05:52:38Z
1
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "Qwen/Qwen1.5-72B", "en", "base_model:abideen/MegaQwen-120B", "base_model:quantized:abideen/MegaQwen-120B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-27T16:44:09Z
--- base_model: abideen/MegaQwen-120B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - Qwen/Qwen1.5-72B --- ## About static quants of https://huggingface.co/abideen/MegaQwen-120B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q2_K.gguf) | Q2_K | 47.9 | | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ3_XS.gguf.part2of2) | IQ3_XS | 52.8 | | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ3_S.gguf.part2of2) | IQ3_S | 55.6 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q3_K_S.gguf.part2of2) | Q3_K_S | 55.6 | | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ3_M.gguf.part2of2) | IQ3_M | 58.6 | | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q3_K_M.gguf.part2of2) | Q3_K_M | 62.1 | lower quality | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q3_K_L.gguf.part2of2) | Q3_K_L | 67.7 | | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ4_XS.gguf.part2of2) | IQ4_XS | 68.7 | | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q4_0.gguf.part2of2) | Q4_0 | 72.0 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ4_NL.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.IQ4_NL.gguf.part2of2) | IQ4_NL | 72.4 | prefer IQ4_XS | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q4_K_S.gguf.part2of2) | Q4_K_S | 72.5 | 
fast, recommended | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q4_K_M.gguf.part2of2) | Q4_K_M | 76.9 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q5_K_S.gguf.part2of2) | Q5_K_S | 87.4 | | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q5_K_M.gguf.part2of2) | Q5_K_M | 89.9 | | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q6_K.gguf.part3of3) | Q6_K | 103.8 | very good quality | | [PART 1](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/MegaQwen-120B-GGUF/resolve/main/MegaQwen-120B.Q8_0.gguf.part3of3) | Q8_0 | 133.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
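Every quant of this 120B merge except Q2_K is split into two or three parts, so fetching one quant file by file is tedious. A minimal sketch using huggingface_hub's snapshot_download with a filename pattern is shown below; the chosen quant and glob pattern are assumptions based on the file names in the table above.

```python
# Minimal sketch: fetch all parts of one quant in a single call and list them
# in concatenation order (part1of2 before part2of2, and so on).
from pathlib import Path
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mradermacher/MegaQwen-120B-GGUF",
    allow_patterns=["MegaQwen-120B.Q4_K_S.gguf.part*"],
)

parts = sorted(Path(local_dir).glob("MegaQwen-120B.Q4_K_S.gguf.part*"))
print("concatenate in this order:", [p.name for p in parts])
```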
mradermacher/MythoLogic-L2-13b-i1-GGUF
mradermacher
2024-05-06T05:52:08Z
37
0
transformers
[ "transformers", "gguf", "en", "base_model:Gryphe/MythoLogic-L2-13b", "base_model:quantized:Gryphe/MythoLogic-L2-13b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-03-27T19:53:41Z
--- base_model: Gryphe/MythoLogic-L2-13b language: - en library_name: transformers license: other quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/Gryphe/MythoLogic-L2-13b <!-- provided-files --> ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ1_M.gguf) | i1-IQ1_M | 3.5 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ2_S.gguf) | i1-IQ2_S | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ2_M.gguf) | i1-IQ2_M | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q2_K.gguf) | i1-Q2_K | 5.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ3_S.gguf) | i1-IQ3_S | 6.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ3_M.gguf) | i1-IQ3_M | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q4_0.gguf) | i1-Q4_0 | 7.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.2 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/MythoLogic-L2-13b-i1-GGUF/resolve/main/MythoLogic-L2-13b.i1-Q6_K.gguf) | i1-Q6_K | 11.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
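The ikawrakow graph referenced above plots quantization quality against bits per weight; for orientation, the sketch below derives rough bits-per-weight figures for a few of these quants from the Size/GB column and the nominal 13B parameter count. The numbers are ballpark only (table sizes are rounded and GGUF files also carry metadata), so treat them as assumptions.

```python
# Rough bits-per-weight estimates for selected i1 quants of this 13B model,
# computed from the Size/GB column above. Ballpark figures only: sizes are
# rounded in the table and GGUF files hold metadata as well as weights.
PARAMS = 13e9  # nominal parameter count taken from the model name

sizes_gb = {"i1-IQ2_M": 4.8, "i1-Q3_K_M": 6.6, "i1-Q4_K_M": 8.2, "i1-Q6_K": 11.0}

for quant, gb in sizes_gb.items():
    bpw = gb * 8e9 / PARAMS  # gigabytes -> bits, divided by parameter count
    print(f"{quant:>10}: ~{bpw:.1f} bits/weight")
```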
mradermacher/Kazbek-7B-GGUF
mradermacher
2024-05-06T05:51:59Z
42
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "alpindale/Mistral-7B-v0.2-hf", "Inv/Konstanta-V4-Alpha-7B", "en", "base_model:Inv/Kazbek-7B", "base_model:quantized:Inv/Kazbek-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-27T21:07:44Z
--- base_model: Inv/Kazbek-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge - alpindale/Mistral-7B-v0.2-hf - Inv/Konstanta-V4-Alpha-7B --- ## About static quants of https://huggingface.co/Inv/Kazbek-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Elbrus-7B-GGUF
mradermacher
2024-05-06T05:51:49Z
23
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "alpindale/Mistral-7B-v0.2-hf", "Inv/Konstanta-V4-Alpha-7B", "en", "base_model:Inv/Elbrus-7B", "base_model:quantized:Inv/Elbrus-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-27T23:53:35Z
--- base_model: Inv/Elbrus-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge - alpindale/Mistral-7B-v0.2-hf - Inv/Konstanta-V4-Alpha-7B --- ## About static quants of https://huggingface.co/Inv/Elbrus-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Elbrus-7B-GGUF/resolve/main/Elbrus-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MonarchCoder-MoE-2x7B-GGUF
mradermacher
2024-05-06T05:51:25Z
55
0
transformers
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "mlabonne/AlphaMonarch-7B", "Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0", "en", "base_model:abideen/MonarchCoder-MoE-2x7B", "base_model:quantized:abideen/MonarchCoder-MoE-2x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-28T01:26:57Z
--- base_model: abideen/MonarchCoder-MoE-2x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - frankenmoe - merge - mergekit - lazymergekit - mlabonne/AlphaMonarch-7B - Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0 --- ## About static quants of https://huggingface.co/abideen/MonarchCoder-MoE-2x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.IQ3_M.gguf) | IQ3_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q4_0.gguf) | Q4_0 | 7.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.IQ4_NL.gguf) | IQ4_NL | 7.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MonarchCoder-MoE-2x7B-GGUF/resolve/main/MonarchCoder-MoE-2x7B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on 
the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
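For readers unsure where to start with the quant table above, a minimal sketch of fetching one of the single-file quants with `huggingface_hub` follows; the repo and file names are taken from the Q4_K_S row, while everything else (including the choice of `huggingface_hub` itself) is an illustrative assumption rather than the card author's prescribed workflow.

```python
# Sketch: download a single quant file from the MonarchCoder-MoE-2x7B-GGUF repo.
# Assumes `pip install huggingface_hub`; the filename matches the Q4_K_S row of the table.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/MonarchCoder-MoE-2x7B-GGUF",
    filename="MonarchCoder-MoE-2x7B.Q4_K_S.gguf",  # "fast, recommended" per the table
)
print(gguf_path)  # local path of the cached GGUF file
```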
mradermacher/Blitz-AI-ULTRA-i1-GGUF
mradermacher
2024-05-06T05:51:20Z
29
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:DenisTheDev/Blitz-AI-ULTRA", "base_model:quantized:DenisTheDev/Blitz-AI-ULTRA", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-28T02:11:21Z
--- base_model: DenisTheDev/Blitz-AI-ULTRA language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About weighted/imatrix quants of https://huggingface.co/DenisTheDev/Blitz-AI-ULTRA <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ1_S.gguf) | i1-IQ1_S | 32.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ1_M.gguf) | i1-IQ1_M | 34.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 38.8 | | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ2_XS.gguf) | i1-IQ2_XS | 42.3 | | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ2_S.gguf) | i1-IQ2_S | 45.2 | | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ2_M.gguf) | i1-IQ2_M | 48.4 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 51.6 | IQ3_XXS probably better | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 53.0 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 56.7 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 59.6 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 59.6 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 62.7 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 66.3 | 
IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 72.1 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 72.5 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ4_NL.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-IQ4_NL.gguf.part2of2) | i1-IQ4_NL | 76.5 | prefer IQ4_XS | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 76.7 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 77.0 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 81.4 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 92.3 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 94.9 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF/resolve/main/Blitz-AI-ULTRA.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 109.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
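Several of the quants in the table above are split into `.partXofY` pieces, and the Usage note points to TheBloke's READMEs for how to concatenate them. A minimal sketch of rejoining the two-part i1-Q2_K download is shown below, assuming both parts are already in the working directory; on a Unix shell, `cat Blitz-AI-ULTRA.i1-Q2_K.gguf.part*of2 > Blitz-AI-ULTRA.i1-Q2_K.gguf` achieves the same result.

```python
# Sketch: rejoin a split quant (here the two-part i1-Q2_K) into a single GGUF file.
# Assumes the .part1of2 and .part2of2 files have already been downloaded locally.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("Blitz-AI-ULTRA.i1-Q2_K.gguf.part*"))
with open("Blitz-AI-ULTRA.i1-Q2_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part so huge files never sit in RAM
```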
mradermacher/Spaetzle-v8-7b-orpo-GGUF
mradermacher
2024-05-06T05:50:32Z
5
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "base_model:cstr/Spaetzle-v8-7b-orpo", "base_model:quantized:cstr/Spaetzle-v8-7b-orpo", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-28T04:10:10Z
--- base_model: cstr/Spaetzle-v8-7b-orpo language: - en library_name: transformers quantized_by: mradermacher tags: - generated_from_trainer --- ## About static quants of https://huggingface.co/cstr/Spaetzle-v8-7b-orpo <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Spaetzle-v8-7b-orpo-GGUF/resolve/main/Spaetzle-v8-7b-orpo.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you 
might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
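The Usage section above stops at pointing to external READMEs, so as a hedged illustration only: one common way to run a downloaded quant is llama-cpp-python. The file name matches the Q4_K_M row of the table; the prompt, context size, and the choice of runtime are assumptions made for this sketch.

```python
# Sketch: load a local GGUF quant with llama-cpp-python and generate a short completion.
# Assumes the Q4_K_M file from the table has already been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(model_path="Spaetzle-v8-7b-orpo.Q4_K_M.gguf", n_ctx=2048)
result = llm("Write one sentence about quantized language models.", max_tokens=64)
print(result["choices"][0]["text"])
```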
mradermacher/Mistral-v2-orpo-GGUF
mradermacher
2024-05-06T05:50:24Z
56
1
transformers
[ "transformers", "gguf", "en", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "base_model:abideen/Mistral-v2-orpo", "base_model:quantized:abideen/Mistral-v2-orpo", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-28T04:26:37Z
--- base_model: abideen/Mistral-v2-orpo datasets: - argilla/distilabel-capybara-dpo-7k-binarized language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/abideen/Mistral-v2-orpo <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-v2-orpo-GGUF/resolve/main/Mistral-v2-orpo.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
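As with the other cards in this series, the table above is the authoritative list of what was uploaded; if you prefer to check programmatically which quant files exist before downloading, a small sketch using `huggingface_hub` follows (the repo id comes from the links above, the rest is illustrative).

```python
# Sketch: list the GGUF quant files actually present in the Mistral-v2-orpo-GGUF repo.
from huggingface_hub import list_repo_files

files = list_repo_files("mradermacher/Mistral-v2-orpo-GGUF")
for name in sorted(files):
    if name.endswith(".gguf"):
        print(name)  # e.g. Mistral-v2-orpo.Q4_K_S.gguf
```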
MeanBean-05/fine-tuned-gist
MeanBean-05
2024-05-06T05:50:21Z
9
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:avsolatorio/GIST-small-Embedding-v0", "base_model:finetune:avsolatorio/GIST-small-Embedding-v0", "model-index", "region:us" ]
text-classification
2024-05-06T05:47:08Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer base_model: avsolatorio/GIST-small-Embedding-v0 metrics: - accuracy widget: - text: 'User: Hello, I want to transfer some funds to another bank account. Bank Bot: Hi, you can do that through our online banking system. Can you please confirm the amount and the account details? User: Sure, I want to transfer $500 to account number 123456789. Bank Bot: Okay, I have processed your request to transfer $500 to account number 123456789. Is there anything else I can assist you with? User: No, actually I wanted to transfer $1000, not $500.' - text: 'User: Hello, I''m having trouble with my ATM card. Bank Bot: I''m sorry to hear that. To assist you further, could you please provide me with your account number. User: Yeah, 1234454673838 this is my account number. Bank Bot: It seems like your account is blocked. We will investigate and get back to you. Do you have any other issues? User: That''s all I needed help with. Thank you for your assistance.' - text: 'User: Hello, how do I transfer funds to my friend''s account? Bank Bot: Hi, you can transfer funds through our online banking system or mobile banking app. Which one would you prefer? User: I would prefer the mobile banking app. Bank Bot: Great! To transfer funds through the app, you need to login first. Once logged in, select the option "Transfer Funds" from the menu. User: Okay, I have logged in and selected "Transfer Funds". What details do I need to provide? Bank Bot: You need to provide the account number, name of the recipient, and the bank name and branch where your friend''s account is held. Do you have these details handy? User: Yes, I do. But can you confirm the daily transfer limit for me?' - text: 'User: Hello, I''m having trouble with my ATM card. Bank Bot: I''m sorry to hear that. To assist you further, could you please provide me with your account number. User: Yes, my account number is 9872123424. Bank Bot: When did you transferred your money? User: The transaction happened on 5th September. Bank Bot: Okay, I''ll investigate. User: Sorry, I provided the wrong date for the transaction. It was actually on 6th September.' - text: 'User: Hello, how do I transfer funds to my friend''s account? Bank Bot: Hi, you can transfer funds through our online banking system or mobile banking app. Which one would you prefer? User: I would prefer the online banking. Bank Bot: Alright. Can you please provide me the account number of your friend to initiate the transfer? User: I''m sorry, I cannot provide you with that information. Bank Bot: I understand. However, we need the account number to initiate the transfer. Alternatively, you can ask your friend to provide their account number to you and then initiate the transfer. User: I still refuse to provide the account number. Is there any other way to transfer funds?' pipeline_tag: text-classification inference: true model-index: - name: SetFit with avsolatorio/GIST-small-Embedding-v0 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.5714285714285714 name: Accuracy --- # SetFit with avsolatorio/GIST-small-Embedding-v0 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [avsolatorio/GIST-small-Embedding-v0](https://huggingface.co/avsolatorio/GIST-small-Embedding-v0) as the Sentence Transformer embedding model. 
A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [avsolatorio/GIST-small-Embedding-v0](https://huggingface.co/avsolatorio/GIST-small-Embedding-v0) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 14 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Restart Conversation | <ul><li>"User: HEY! TRANSFER FUNDS. MY TRANSFER NOT PROCESS. PLS HELP.\nBot: Hello there. I'm happy to assist you with your fund transfer. Can you please provide me with the transaction details?\nUser: SRY I NO MEMB I THINK 50 DOLLAR TO MOM.\nBot: I see. Do you have your mom's account details and can you please confirm the amount and reference number of the transaction?\nUser: I THINK YES, WAIT PLS. AHA IT 123456.\nBot: Thank you. I'm checking the transaction now. Please hold on for a moment.\nUser: KK.\nBot: I've checked the transaction and it appears to be successful. Can you please check your mom's account to confirm?\nUser: OH OKAY. I SEE NOW. THNK U SO MUCH. I'm sorry for the mistakes in my previous texts. Can we start over again, please?"</li><li>"User: Hi, I'd like to dispute a transaction on my account.\nBank Bot: I'm sorry to hear that. To assist you further, could you please provide me with the details of the transaction you wish to dispute, including the transaction amount, date, and description?\nUser: Can we start over? I missed the details you asked for earlier.\nBank Bot: Of course. Could you please provide me with the specifics of the transaction you are disputing, including the transaction amount, date, and description?\nUser: $100 on March 15th, at a restaurant. However, I don't recall making this transaction.\nBank Bot: Thank you for providing the details. Based on your information, I can see that the transaction was made at a restaurant. Do you often dine at this particular restaurant?\nUser: No, I have never been there before.\nBank Bot: I'm sorry to hear that. Have you checked with anyone who may have access to your card information, such as a family member or friend?\nUser: Yes, I have checked with my family and friends, but none of them made this transaction.\nBank Bot: I see. In this case, I will need to escalate the issue to our dispute resolution team. Can you please confirm that you did not authorize this transaction?\nUser: Actually I've lost in process can we restart it?"</li><li>"User: I demand to speak to someone about a fraudulent transaction on my account right now!\nBank Bot: I'm sorry to hear about your experience. To assist you further, could you please provide me with the details of the transaction you wish to dispute?\nUser: Actually, let's start over. My account was hacked, and someone made a withdrawal of $1000 that I did not authorize.\nBank Bot: I see. When did this happen, and which transaction specifically are you disputing?\nUser: I forgot the context, can we restart it?"</li></ul> | | Correction Request | <ul><li>'User: Hi, I want to open a new account. Can you help me?\nBank Assist Bot: Yes, absolutely! What kind of account are you looking for?\nUser: Actually, I had selected the wrong option earlier. I am interested in opening a checking account.\nBank Assist Bot: No worries at all! 
Can you please provide your name and contact number so we can begin the process again?\nUser: Yes, my name is Jake Miller and my contact number is 555-555-5555.\nBank Assist Bot: Thanks, Jake! Before we proceed further, may I know if you already have an existing account with our bank?\nUser: Yes, I do.\nBank Assist Bot: Great! Can you please provide your account number and social security number to proceed?\nUser: Yes, my account number is 123456789 and my social security number is 123-45-6789.\nBank Assist Bot: Thanks for sharing the information, Jake. Can you please verify your date of birth as well?\nUser: Oops, I had given the wrong account number. It is actually 987654321.'</li><li>"User: Hi, I would like to dispute a transaction on my account.\nBank bot: Sure, can you please provide me with the details of the transaction you wish to dispute?\nUser: Yes, it was a charge of $100 on my account that I didn't authorize.\nBank bot: I see. During our investigation, we found that the transaction was made through your debit card. Can you confirm if you have authorized any transactions through your card?\nUser: No, I haven't made any such transaction. I think my card details were stolen."</li><li>"User: hi, i have to talk to you about a transfer. i don't know what to do.\nBot: Of course, I'd be happy to assist you with that. Can you be more specific about the transfer you'd like to dispute?\nUser: i don t knw rly. some sort of mistake with my account or smth.\nBot: I understand, could you please provide me with your account number and the date of the transaction so I can take a closer look at the issue and help you resolve it?\nUser: umm.. i think it was two weeks ago and the account number is 12345.\nBot: Thank you for that information. Could you please tell me your full name for verification purposes?\nUser: sure, my name is Emma.\nBot: Thank you, Emma. I'll check that transaction and get back to you with an update shortly.\nUser: oh wait, actually my account number is wrong. It's 54321"</li></ul> | | Affirmative Confirmation | <ul><li>"User: I want to dispute a transaction.\nBank bot: Sure, I'd be happy to help. Can you tell me the date and amount of the transaction?\nUser: Yes, it was on June 15th for $50.\nBank bot : Are you sure?\nUser : Yes."</li><li>"User: hi im not suree if i transfered mony but my balence is to low can u help\nBot: Hello, I'm sorry for the confusion. Can you provide me with more information regarding the transfer in question?\nUser: yea its from cheking to savInc and was $300 last week tuesday\nBot: Alright, thank you for clarifying that for me. I see the transfer you mentioned. It looks like your balance is reflecting that transaction. Is there anything else you're unsure about?\nUser: no thats it thx your helpful"</li><li>'User: Hi, I want to open a new account. Can you help me?\nBank Assist Bot: Yes, absolutely! What kind of account are you looking for?\nUser: A savings account.\nBank Assist Bot: Fantastic! Would you prefer one with a higher interest rate or more flexibility with withdrawals?\nUser: I would prefer a higher interest rate.\nBank Assist Bot: Alright, thank you. Can you please provide me with your name and contact number to begin the process of opening the account?\nUser: Yes, my name is John Smith and my contact number is 555-555-5555.\nBank Assist Bot: Perfect! 
Before we proceed further, may I know if you have an existing account with our bank?\nUser: Yes'</li></ul> | | Questions Requiring External Information | <ul><li>"User: Hi, I think something went wrong with my accunt. There is transaction that I did not make and it think I have been charged.\nBot: I'm sorry to hear that. Could you please give me more information about the transaction so I can investigate?\nUser: Ya, it was on November 20th for about $150. I id not recognize the recipient and id not approve the payment.\nBot: I see. Based on the information you provided, it sounds like this may be fraud. I willassist you with disputing the transaction. Please confirm if you contacted the recipient regarding this issue.\nUser: I did not contact them yet. How do I do it?\nBot: Just get in touch with the merchant or seller involved in the transaction and try to get a refund or more information on what the transaction was for. If you're not able to resolve it that way, we can start the dispute process. Is there anything else I can help you with?\nUser: Not really, thanks. Just one question, how do I make a transfer to other bank accounts?"</li><li>"Certainly! Here's another example:\nUser: Hi, I need to dispute a transaction.\nBank Bot: Okay, I’m sorry to hear that. Could you please provide some more details about the transaction?\nUser: Yes, I see a charge of $200 that I didn’t make.\nBank Bot: I understand. Could you please confirm the date and the name of the merchant?\nUser: It was on September 5th and the name is ABC Electronics.\nBank Bot: Thank you for the information. Is there anything else you'd like to add or modify?\nUser: Can you provide me with the status of my recent deposit?"</li><li>"User: I demand to speak to someone about a fraudulent transaction on my account right now!\nBank Bot: I'm sorry to hear about your experience. To assist you further, could you please provide me with the details of the transaction you wish to dispute?\nUser: This is ridiculous! Someone has stolen $1000 from my account, and I want it back immediately!\nBank Bot: I'm sorry to hear about the unauthorized transaction on your account. Could you please confirm the details of the transaction and your account information to help me assist you better?\nUser: Can you tell me if the transaction was made from a foreign country?"</li></ul> | | Repeat Conversation | <ul><li>"User: Hello, I'd like to dispute a transaction on my account.Bank \nBot: I'm sorry to hear that. Could you please provide me with the details of the transaction you wish to dispute?\nUser: The transaction was for $500 on April 1st, and it was for an online purchase. However, I did not authorize this transaction.\nBank Bot: Thank you for bringing this to our attention. Could you please provide me with your account number and contact information in case we need to follow up with you?\nUser: Sorry, could you Restart Conversation that? I didn't catch the last part."</li><li>"User: I demand to speak to someone about a fraudulent transaction on my account right now!\nBank Bot: I'm sorry to hear about your experience. To assist you further, could you please provide me with the details of the transaction you wish to dispute?\nUser: What did you say? I couldn't hear you properly."</li><li>"User: Hello, I'd like to dispute a transaction on my account.\nBank Bot: I'm sorry to hear that. Could you please provide me with the details of the transaction you wish to dispute?\nUser: The transaction was for $500 on April 1st, and it was for an online purchase. 
However, I did not authorise this transaction.\nBank Bot: The transaction you reported was for $500 on April 1st and was an online purchase. Please confirm if this is correct.\nUser: Sorry, could you please Restart Conversation the details?"</li></ul> | | Pause Conversation | <ul><li>"User: Hi, I want to dispute a transaction on my account.\nBank Bot: Hi there! I'd be happy to assist you with that. Can you please provide me with the details of the transaction you wish to dispute?\nUser: I noticed a charge on my account for $100 that I don't recognize.\nBank Bot: I'm sorry to hear that. Can you please tell me the date the transaction took place and the name of the merchant that charged your account?\nUser: It happened on June 1st and the merchant is called XYZ Company.\nBank Bot: Thanks for the information. We'll need to investigate it further. Can you confirm that you did not authorize this transaction?\nUser: Can I pause the conversation for a moment, please?"</li><li>"User: My transaction is wrong, somthing's happend and I don't recognize this transaction.\nBot: I'm sorry, could you please provide more information regarding the transaction in question such as the date and amount?\nUser: I not know the exact date but the ammaunt was around $30. Let me give second."</li><li>'User: Hi, I want to open a new account. Can you help me?\nBank Assist Bot: Yes, absolutely! What kind of account are you looking for?\nUser: Can I pause for a moment? I need to grab some information first.\nBank Assist Bot: Of course! Take your time and let me know when you are ready.\nUser: Sorry for the wait. I am interested in opening a checking account.\nBank Assist Bot: No problem at all! Can you please provide your name and contact information to begin the process?\nUser: Yes, my name is Sarah Smith and my contact number is 555-555-5555.\nBank Assist Bot: Great, Sarah! Before we proceed further, may I know if you already have an existing account with our bank?\nUser: No, I do not. Can we take a pause here.'</li></ul> | | Negative Confirmation | <ul><li>"User: Hi, I noticed a transaction on my account that I don't recognize. Can you help me with this?\nBank Bot: Sure, I can assist you with that. Can you please provide me with the details of the transaction in question?\nUser: It's a charge for $100 to a website I've never heard of.\nBank Bot: I see. Based on our records, the transaction was processed on the 15th of this month. Are you disputing the transaction?\nUser: No I did not make this transaction."</li><li>"User: Hi there, I need to speak to someone about a problem I'm having with a recent transaction.\nBank Bot: Alright, I'm here to help. What seems to be the issue?\nUser: I received a charge on my account that I don't recognize. Can you tell me what it's for?\nBank Bot: Yes, it looks like the charge is from a restaurant on Main Street for $100. Is that familiar to you?\nUser: No, it's definitely not. I haven't been to Main Street in months."</li><li>"User: Hi, I want to open a new account. Can you help me?\nBank Assist Bot: Yes, absolutely! What kind of account are you looking for?\nUser: Actually, I wanted to check the status of my existing account.\nBank Assist Bot: Oh, I see. Apologies for the confusion. Can you please tell me your account number?\nUser: Actually, I don't have it with me right now."</li></ul> | | Continue Conversation | <ul><li>"User: Hi, I want to dispute a transaction on my account.\nBank Bot: Hi there! I'd be happy to assist you with that. 
Can you please provide me with the details of the transaction you wish to dispute?\nUser: I noticed a charge on my account for $100 that I don't recognize.\nBank Bot: I'm sorry to hear that. Can you please tell me the date the transaction took place and the name of the merchant that charged your account?\nUser: It happened on June 1st and the merchant is called XYZ Company."</li><li>"User: Hello, I'd like to dispute a transaction on my account.\nBank Bot: I'm sorry to hear that. Could you please provide me with the details of the transaction you wish to dispute?\nUser: The transaction was for $500 on April 1st, and it was for an online purchase. However, I did not authorise this transaction.\nBank Bot: Thank you for providing me with the information. We will investigate this transaction and get back to you with an update as soon as possible. Do you have any other transaction disputes that you want to report?\nUser: No, this is the only one. Thank you for your help.\nBank Bot: You're welcome. Is there anything else I can assist you with today?\nUser: No, that's all for now. Thank you again."</li><li>"User: I demand to speak to someone about a fraudulent transaction on my account right now!\nBank Bot: I'm sorry to hear about your experience. To assist you further, could you please provide me with the details of the transaction you wish to dispute?\nUser: This is ridiculous! Someone has stolen $1000 from my account, and I want it back immediately!\nBank Bot: I completely understand your frustration. Can you please tell me which transaction you are disputing and when it took place?\nUser: It happened on September 15th, and it was a withdrawal at an ATM that I didn't make.\nBank Bot: I see. Just to clarify, was this at an ATM that you normally use?\nUser: No, it was at a different location that I have never been to before.\nBank Bot: Thank you for providing that information. We will investigate this immediately and get back to you with the results. Is there anything else that I can assist you with?\nUser: No, just please resolve this issue as soon as possible."</li></ul> | | End Conversation | <ul><li>"User: Hi, I am not sure about the transaction I made yesterday.\nBot: Hello, I'm sorry to hear that. Can you please elaborate on the transaction in question?\nUser: Yeah, I transferred funds to a friend, but I think it's not reflecting in his account.\nBot: I understand your concern. Could you please tell me the transaction details such as the amount and the recipient's account details? I can check our records and provide a solution.\nUser: Oh sure, let me check the details and get back to you.\nBot: Alright, take your time and let me know if you need any further assistance. Have a good day!\nUser: Thank you, bye."</li><li>'User: I want to dispute a transaction.\nBank Bot: Sure, please provide me with the details of the transaction.\nUser: There was a charge of $75 from a restaurant that I never went to.\nBank Bot: I understand. Can you please provide me with the transaction date and amount?\nUser: The transaction was on September 1st for $75.\nBank Bot: Okay, I have initiated the dispute process. You will receive an update within the next 7 business days.\nUser: Thank you, I appreciate your help.'</li><li>"User: I need help disputing a transaction.\nBank Bot: Sure, I can help you with that. 
Can you please provide me with the details of the transaction?\nUser: It was a charge of $50 from a vendor I don't recognize.\nBank Bot: Okay, I will need to verify some information to start the dispute process. Can you please confirm the date and amount of the transaction?\nUser: The transaction was on August 25th for $50.\nBank Bot: Thank you. I will initiate the dispute process and provide you with updates promptly.\nUser: Alright, thanks for your help."</li></ul> | | New Inquiry or Request | <ul><li>"User: Hi, I was looking at my account and noticed a transaction that I don't recognize. Can you explain it to me please?\nBot: Of course, I'd be glad to help. Can you provide me with the transaction details (date, amount, description)?\nUser: Sorry, I don't have the details at the moment. But can you check if there's any discrepancy in my account?\nBot: Sure, I can review your account transactions for any inaccuracies. However, it would be helpful if you could provide me with more information about the transaction in question.\nUser: I understand. I'll check my records and get back to you. By the way, can you tell me if your bank offers international transfer services?"</li><li>"User: hai, i need helpe with my accaunt. ive been charge for something i didn't dorecentely.\nBot: Hello, I apologize but I did not understand your request. Could you please provide more details about the transaction in question?\nUser: Yes soree, i meen i woz chargd for a purchaes that i didnt mad laat week. Can you helip?\nBot: I see. I believe you are notifying me of an unauthorized charge on your account. To help resolve this issue, please provide the date, amount, and description of the transaction so that I can investigate further.\nUser: Ok, lemmi chek. By the waay, i judt want to knoe if i can take a hom loan?"</li><li>"User: Hi, I have a question about disputing a transaction on my account.\nBank Bot: Hello! I'm happy to help. Please provide me with some details about the transaction in question.\nUser: I received a product that was damaged, but the merchant has refused to issue a refund. What can I do?\nBank Bot: I'm sorry to hear that. Have you tried contacting the merchant to resolve the issue?\nUser: Yes, but they are not being cooperative. Can you help me dispute the transaction?"</li></ul> | | Questions Answerable by Conversational Context / History | <ul><li>"User: Hello, I'd like to dispute a transaction on my account.Bank \nBot: I'm sorry to hear that. Could you please provide me with the details of the transaction you wish to dispute?\nUser: The transaction was for $500 on April 1st, and it was for an online purchase. However, I did not authorize this transaction.\nBank Bot: Thank you for bringing this to our attention. Could you please provide me with your account number and contact information in case we need to follow up with you?\nUser: Sure, my account number is XXXXXXXX and my phone number is XXX-XXX-XXXX.\nBank Bot: Thank you for the information. We will investigate the transaction and get back to you within 3-5 business days.\nUser: How long will it take?"</li><li>"User: I demand to speak to someone about a fraudulent transaction on my account right now!\nBank Bot: I'm sorry to hear about your experience. To assist you further, could you please provide me with the details of the transaction you wish to dispute?\nUser: This is ridiculous! 
Someone has stolen $1000 from my account, and I want it back immediately!\nBank Bot: I'm sorry to hear about the unauthorized transaction on your account. Could you please confirm the details of the transaction and your account information to help me assist you better?\nUser: Can you tell me if there have been any other unauthorized transactions on my account?"</li><li>"User: Hi, I'd like to dispute a transaction on my account.\nBank Bot: I'm sorry to hear that. To assist you further, could you please provide me with the details of the transaction you wish to dispute, including the transaction amount, date, and description?\nUser: Yes, my account number is 12937492174.\nBank Bot: Thank you for providing the details. Based on your information, I can see that the transaction was made at a restaurant. Do you often dine at this particular restaurant?\nUser: No, I have never been there before.\nBank Bot: I'm sorry to hear that. Have you checked with anyone who may have access to your card information, such as a family member or friend?\nUser: Yes, I have checked with my family and friends, but none of them made this transaction.\nBank Bot: I see. In this case, I will need to escalate the issue to our dispute resolution team. Can you please confirm that you did not authorize this transaction?\nUser: Yes, I did not authorize this transaction.\nBank Bot: Great, I will initiate the dispute process for you and send you an update via email within the next 5-7 business days. Is there anything else I can assist you with today?\nUser: Could you let me know what the dispute process entails?"</li></ul> | | Refuse to Answer | <ul><li>"User: Hello, I'd like to dispute a transaction on my account.Bank \nBot: I'm sorry to hear that. Could you please provide me with the details of the transaction you wish to dispute?\nUser: The transaction was for $500 on April 1st, and it was for an online purchase. However, I did not authorize this transaction.\nBank Bot: Thank you for bringing this to our attention. Could you please provide me with your account number and contact information in case we need to follow up with you?\nUser: I'd rather not give out my personal information."</li><li>"User: I demand to speak to someone about a fraudulent transaction on my account right now!\nBank Bot: I'm sorry to hear about your experience. To assist you further, could you please provide me with the details of the transaction you wish to dispute?\nUser: That's none of your business. Just help me get my money back."</li><li>"User: Hi, I want to open a new account. Can you help me?\nBank Assist Bot: Yes, absolutely! What kind of account are you looking for?\nUser: I don't feel comfortable giving out my personal information."</li></ul> | | Transfer to Human Agent | <ul><li>"User: Hi, I want to dispute a transaction on my account.\nBank Bot: Hi there! I'd be happy to assist you with that. Can you please provide me with the details of the transaction you wish to dispute?\nUser: I noticed a charge on my account for $100 that I don't recognize.\nBank Bot: I'm sorry to hear that. Can you please tell me the date the transaction took place and the name of the merchant that charged your account?\nUser: It happened on June 1st and the merchant is called XYZ Company.\nBank Bot: Thanks for the information. We'll need to investigate it further. Can you confirm that you did not authorize this transaction?\nUser: Transfer to a human agent please."</li><li>"User: What the hell is going on with this transaction? 
I'm having serious issues, and your bot is so useless.\nBot: I'm sorry for any inconvenience caused. Can you please provide me with the transaction details so I can assist you better?\nUser: I don't have time to waste. Why can't you understand my problem? Your bot is such a waste of time.\nBot: I apologize for any frustration you're experiencing. Can you please explain the problem you're facing so I can forward this to our customer support executive?\nUser: Just transfer me to a human agent right now. I'm done with your bot."</li><li>"User: Hi, I'd like to dispute a transaction on my account.\nBank Bot: I'm sorry to hear that. To assist you further, could you please provide me with the details of the transaction you wish to dispute, including the transaction amount, date, and description?\nUser: I've already provided that information. Can I please speak with a human agent about this?"</li></ul> | | Questions on Process | <ul><li>'User: What the hell is this? I did not authorize a transaction for $100 with "Dan\'s Pizza"! This is ridiculous!\nBot: I\'m sorry for the inconvenience you\'ve experienced, and I would be happy to help. Can you please confirm your name and the last four digits of your credit card number so I can pull up your account and investigate the transaction?\nUser: My name is John Doe and the last four digits of my card are 6789. This is such bullshit, I have never even been to Dan\'s Pizza!\nBot: Thank you for providing me with that information. I understand how frustrating and concerning this is for you, and I apologize for any inconvenience caused. Our team will investigate the transaction and follow up with you within 2-3 business days with more information or next steps.\nUser: I need to know what steps are being taken in this process?'</li><li>"User: Hi, I want to open a new account. Can you help me?\nBank Assist Bot: Yes, absolutely! What kind of account are you looking for?\nUser: A savings account with higher interest rates.\nBank Assist Bot: Great! Can you please provide me with your name and contact number to begin the process of opening the account?\nUser: Yes, my name is Jane Smith and my contact number is 555-555-5555.\nBank Assist Bot: Perfect! Before we proceed further, can you please confirm that you do not have an existing account with our bank?\nUser: That's correct, I don't.\nBank Assist Bot: Understood. To open a new account, we will need some additional information from you. Can you please provide your date of birth and address?\nUser: Sure, my date of birth is 01/01/1990 and my address is 123 Main Street.\nBank Assist Bot: Thank you for that, Jane. Please note that it might take up to 24-48 hours to process your application and create your account. Is there anything else you need assistance with?\nUser: Yes, actually. Can you explain what happens after I submit my application?"</li><li>'User: Can you please explain the process for disputing a transaction?\nBank Bot: Absolutely. First, you need to report the transaction as unauthorized or fraudulent within 60 days of the date it appeared on your statement. Then, we will investigate the issue and work with you to resolve it.\nUser: What kind of information do I need to provide during the investigation?\nBank Bot: You will need to provide documentation proving that you did not make the transaction, such as a police report or signed affidavit. 
In some cases, we may also need additional information about the transaction in question.\nUser: How long does the investigation typically take?'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.5714 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("User: Hello, I'm having trouble with my ATM card. Bank Bot: I'm sorry to hear that. To assist you further, could you please provide me with your account number. User: Yeah, 1234454673838 this is my account number. Bank Bot: It seems like your account is blocked. We will investigate and get back to you. Do you have any other issues? User: That's all I needed help with. Thank you for your assistance.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:----| | Word count | 34 | 109.9786 | 227 | | Label | Training Sample Count | |:---------------------------------------------------------|:----------------------| | Continue Conversation | 10 | | Pause Conversation | 10 | | Restart Conversation | 10 | | Repeat Conversation | 10 | | End Conversation | 10 | | Refuse to Answer | 10 | | Affirmative Confirmation | 10 | | Negative Confirmation | 10 | | Correction Request | 10 | | Questions Answerable by Conversational Context / History | 10 | | Questions Requiring External Information | 10 | | Questions on Process | 10 | | New Inquiry or Request | 10 | | Transfer to Human Agent | 10 | ### Training Hyperparameters - batch_size: (12, 12) - num_epochs: (3, 3) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0007 | 1 | 0.3613 | - | | 0.0330 | 50 | 0.3367 | - | | 0.0659 | 100 | 0.2621 | - | | 0.0989 | 150 | 0.1997 | - | | 0.1318 | 200 | 0.1906 | - | | 0.1648 | 250 | 0.1034 | - | | 0.1978 | 300 | 0.0784 | - | | 0.2307 | 350 | 0.1119 | - | | 0.2637 | 400 | 0.0694 | - | | 0.2966 | 450 | 0.0693 | - | | 0.3296 | 500 | 0.0542 | - | | 0.3626 | 550 | 0.0669 | - | | 0.3955 | 600 | 0.0594 | - | | 0.4285 | 650 | 0.0175 | - | | 0.4614 | 700 | 0.0125 | - | | 0.4944 | 750 | 0.0057 | - | | 0.5274 | 800 | 0.0086 | - | | 0.5603 | 850 | 0.076 | - | | 0.5933 | 900 | 0.0077 | - | | 0.6262 | 950 | 0.0135 | - | | 0.6592 | 1000 | 0.012 | - | | 0.6922 | 1050 | 0.0094 | - | | 0.7251 | 1100 | 0.0735 | - | | 0.7581 | 1150 | 0.0047 | - | | 0.7910 | 1200 | 0.0699 | - | | 0.8240 | 1250 | 0.0063 | - 
| | 0.8570 | 1300 | 0.0044 | - | | 0.8899 | 1350 | 0.0028 | - | | 0.9229 | 1400 | 0.0706 | - | | 0.9558 | 1450 | 0.0047 | - | | 0.9888 | 1500 | 0.0711 | - | | 1.0218 | 1550 | 0.0036 | - | | 1.0547 | 1600 | 0.0024 | - | | 1.0877 | 1650 | 0.1245 | - | | 1.1206 | 1700 | 0.0044 | - | | 1.1536 | 1750 | 0.0566 | - | | 1.1866 | 1800 | 0.0045 | - | | 1.2195 | 1850 | 0.0046 | - | | 1.2525 | 1900 | 0.0033 | - | | 1.2854 | 1950 | 0.0031 | - | | 1.3184 | 2000 | 0.0095 | - | | 1.3514 | 2050 | 0.0034 | - | | 1.3843 | 2100 | 0.0031 | - | | 1.4173 | 2150 | 0.049 | - | | 1.4502 | 2200 | 0.0023 | - | | 1.4832 | 2250 | 0.0034 | - | | 1.5162 | 2300 | 0.0039 | - | | 1.5491 | 2350 | 0.0056 | - | | 1.5821 | 2400 | 0.0027 | - | | 1.6150 | 2450 | 0.0025 | - | | 1.6480 | 2500 | 0.0014 | - | | 1.6809 | 2550 | 0.0029 | - | | 1.7139 | 2600 | 0.0024 | - | | 1.7469 | 2650 | 0.0017 | - | | 1.7798 | 2700 | 0.0018 | - | | 1.8128 | 2750 | 0.0018 | - | | 1.8457 | 2800 | 0.0018 | - | | 1.8787 | 2850 | 0.0025 | - | | 1.9117 | 2900 | 0.0024 | - | | 1.9446 | 2950 | 0.0022 | - | | 1.9776 | 3000 | 0.002 | - | | 2.0105 | 3050 | 0.0017 | - | | 2.0435 | 3100 | 0.0021 | - | | 2.0765 | 3150 | 0.0019 | - | | 2.1094 | 3200 | 0.0016 | - | | 2.1424 | 3250 | 0.0017 | - | | 2.1753 | 3300 | 0.0016 | - | | 2.2083 | 3350 | 0.0015 | - | | 2.2413 | 3400 | 0.0017 | - | | 2.2742 | 3450 | 0.0015 | - | | 2.3072 | 3500 | 0.0014 | - | | 2.3401 | 3550 | 0.0012 | - | | 2.3731 | 3600 | 0.0011 | - | | 2.4061 | 3650 | 0.0015 | - | | 2.4390 | 3700 | 0.0016 | - | | 2.4720 | 3750 | 0.0018 | - | | 2.5049 | 3800 | 0.0012 | - | | 2.5379 | 3850 | 0.0021 | - | | 2.5709 | 3900 | 0.0014 | - | | 2.6038 | 3950 | 0.0014 | - | | 2.6368 | 4000 | 0.0013 | - | | 2.6697 | 4050 | 0.0014 | - | | 2.7027 | 4100 | 0.0016 | - | | 2.7357 | 4150 | 0.0016 | - | | 2.7686 | 4200 | 0.0019 | - | | 2.8016 | 4250 | 0.0014 | - | | 2.8345 | 4300 | 0.0015 | - | | 2.8675 | 4350 | 0.0012 | - | | 2.9005 | 4400 | 0.0011 | - | | 2.9334 | 4450 | 0.0013 | - | | 2.9664 | 4500 | 0.0016 | - | | 2.9993 | 4550 | 0.0014 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.1 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
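To make the two-step SetFit recipe and the hyperparameters listed in the card above concrete, a minimal training sketch follows. The inline examples and label strings are placeholders (the actual model was trained on 10 examples per label across 14 labels), and the argument values simply mirror the card's "Training Hyperparameters" section; treat it as an illustration, not a verified reproduction.

```python
# Sketch: few-shot SetFit training that mirrors the hyperparameters reported above.
# The tiny inline dataset is a stand-in for the real 14-label training set.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_ds = Dataset.from_dict({
    "text": [
        "User: Can we start over, please?",
        "User: I lost track, let's restart this conversation.",
        "User: No, that's all. Thank you.",
        "User: That's everything I needed, bye.",
    ],
    "label": ["Restart Conversation", "Restart Conversation",
              "End Conversation", "End Conversation"],
})

# Sentence Transformer body + default LogisticRegression head, as described in the card.
model = SetFitModel.from_pretrained(
    "avsolatorio/GIST-small-Embedding-v0",
    labels=["Restart Conversation", "End Conversation"],
)
args = TrainingArguments(
    batch_size=(12, 12),
    num_epochs=(3, 3),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # fine-tunes the embedding body contrastively, then fits the classification head
print(model.predict(["User: Transfer me to a human agent right now."]))
```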
mradermacher/Noro-Hermes-7B-GGUF
mradermacher
2024-05-06T05:50:01Z
37
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "NeverSleep/Noromaid-7B-0.4-DPO", "NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "en", "base_model:ThomasComics/Noro-Hermes-7B", "base_model:quantized:ThomasComics/Noro-Hermes-7B", "endpoints_compatible", "region:us" ]
null
2024-03-28T06:56:31Z
--- base_model: ThomasComics/Noro-Hermes-7B language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - NeverSleep/Noromaid-7B-0.4-DPO - NousResearch/Nous-Hermes-2-Mistral-7B-DPO --- ## About static quants of https://huggingface.co/ThomasComics/Noro-Hermes-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-7B-GGUF/resolve/main/Noro-Hermes-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
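As a minimal sketch of the Usage note above — assuming `huggingface_hub` and `llama-cpp-python` are installed, with the file name taken from the quant table:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the single-file quants listed above (Q4_K_M is the "fast, recommended" pick).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Noro-Hermes-7B-GGUF",
    filename="Noro-Hermes-7B.Q4_K_M.gguf",
)

# Load the quantized model and run a plain text completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence about dragons.", max_tokens=64)
print(out["choices"][0]["text"])
```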
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Noro-Hermes-3x7B-GGUF
mradermacher
2024-05-06T05:49:35Z
161
2
transformers
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "NeverSleep/Noromaid-7B-0.4-DPO", "mistralai/Mistral-7B-Instruct-v0.2", "en", "base_model:ThomasComics/Noro-Hermes-3x7B", "base_model:quantized:ThomasComics/Noro-Hermes-3x7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-28T08:21:34Z
--- base_model: ThomasComics/Noro-Hermes-3x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - frankenmoe - merge - mergekit - lazymergekit - NousResearch/Nous-Hermes-2-Mistral-7B-DPO - NeverSleep/Noromaid-7B-0.4-DPO - mistralai/Mistral-7B-Instruct-v0.2 --- ## About static quants of https://huggingface.co/ThomasComics/Noro-Hermes-3x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q2_K.gguf) | Q2_K | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.IQ3_XS.gguf) | IQ3_XS | 7.8 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q3_K_S.gguf) | Q3_K_S | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.IQ3_S.gguf) | IQ3_S | 8.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.IQ3_M.gguf) | IQ3_M | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q3_K_M.gguf) | Q3_K_M | 9.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q3_K_L.gguf) | Q3_K_L | 9.9 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.IQ4_XS.gguf) | IQ4_XS | 10.3 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q4_0.gguf) | Q4_0 | 10.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q4_K_S.gguf) | Q4_K_S | 10.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.IQ4_NL.gguf) | IQ4_NL | 10.8 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q4_K_M.gguf) | Q4_K_M | 11.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q5_K_S.gguf) | Q5_K_S | 13.0 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q5_K_M.gguf) | Q5_K_M | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q6_K.gguf) | Q6_K | 15.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Noro-Hermes-3x7B-GGUF/resolve/main/Noro-Hermes-3x7B.Q8_0.gguf) | Q8_0 | 19.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Aditi25/experimenting_with_falcon_instruct_copy
Aditi25
2024-05-06T05:49:11Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:tiiuae/falcon-7b-instruct", "base_model:adapter:tiiuae/falcon-7b-instruct", "license:apache-2.0", "region:us" ]
null
2024-05-04T13:07:35Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: tiiuae/falcon-7b-instruct model-index: - name: experimenting_with_falcon_instruct_copy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # experimenting_with_falcon_instruct_copy This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
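Since the card leaves intended usage unspecified, here is a minimal inference sketch, assuming the LoRA adapter is published at this repo id and that `transformers`, `peft`, and `accelerate` are installed:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tiiuae/falcon-7b-instruct"
adapter_id = "Aditi25/experimenting_with_falcon_instruct_copy"  # repo id from this card, assumed to host the adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter to the base model

inputs = tokenizer("Explain what a LoRA adapter is in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```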
mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF
mradermacher
2024-05-06T05:49:05Z
159
2
transformers
[ "transformers", "gguf", "Safetensors", "mistral", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "athirdpath/NSFW_DPO_Noromaid-7b", "safetensors", "text-generation", "en", "dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v2", "dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us", "base_model:MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1", "license:apache-2.0", "conversational" ]
text-generation
2024-03-28T10:26:24Z
--- base_model: MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - Safetensors - mistral - text-generation-inference - merge - mistral - 7b - mistralai/Mistral-7B-Instruct-v0.1 - athirdpath/NSFW_DPO_Noromaid-7b - transformers - safetensors - mistral - text-generation - en - dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v2 - dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW - license:cc-by-nc-4.0 - autotrain_compatible - endpoints_compatible - has_space - text-generation-inference - region:us --- ## About static quants of https://huggingface.co/MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | 
[GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF/resolve/main/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MistralMathOctopus-7B-GGUF
mradermacher
2024-05-06T05:48:25Z
109
0
transformers
[ "transformers", "gguf", "multilingual", "en", "base_model:kevinpro/MistralMathOctopus-7B", "base_model:quantized:kevinpro/MistralMathOctopus-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-28T12:05:05Z
--- base_model: kevinpro/MistralMathOctopus-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - multilingual --- ## About static quants of https://huggingface.co/kevinpro/MistralMathOctopus-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q2_K.gguf) | Q2_K | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.IQ3_XS.gguf) | IQ3_XS | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q3_K_S.gguf) | Q3_K_S | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.IQ3_S.gguf) | IQ3_S | 3.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.IQ3_M.gguf) | IQ3_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q3_K_L.gguf) | Q3_K_L | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.IQ4_XS.gguf) | IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q4_0.gguf) | Q4_0 | 4.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q4_K_S.gguf) | Q4_K_S | 4.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.IQ4_NL.gguf) | IQ4_NL | 4.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q5_K_S.gguf) | Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q5_K_M.gguf) | Q5_K_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q6_K.gguf) | Q6_K | 6.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-7B-GGUF/resolve/main/MistralMathOctopus-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Erosumika-MistralLayla-Slerp-GGUF
mradermacher
2024-05-06T05:48:21Z
42
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "localfultonextractor/Erosumika-7B-v2", "l3utterfly/mistral-7b-v0.2-layla-v4", "en", "base_model:Smuggling1710/Erosumika-MistralLayla-Slerp", "base_model:quantized:Smuggling1710/Erosumika-MistralLayla-Slerp", "endpoints_compatible", "region:us" ]
null
2024-03-28T12:09:08Z
--- base_model: Smuggling1710/Erosumika-MistralLayla-Slerp language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - localfultonextractor/Erosumika-7B-v2 - l3utterfly/mistral-7b-v0.2-layla-v4 --- ## About static quants of https://huggingface.co/Smuggling1710/Erosumika-MistralLayla-Slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best 
quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
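The quant table above is sorted by size; as a rough illustration of how to use it (sizes in GB transcribed from this card, and note that file size is only a lower bound on actual memory use), a small helper could pick the largest quant that fits a given budget:

```python
# Approximate file sizes (GB) from the Erosumika-MistralLayla-Slerp table above.
QUANT_SIZES_GB = {
    "Q2_K": 3.0, "IQ3_XS": 3.3, "Q3_K_S": 3.4, "IQ3_S": 3.4, "IQ3_M": 3.5,
    "Q3_K_M": 3.8, "Q3_K_L": 4.1, "IQ4_XS": 4.2, "Q4_K_S": 4.4, "Q4_K_M": 4.6,
    "Q5_K_S": 5.3, "Q5_K_M": 5.4, "Q6_K": 6.2, "Q8_0": 7.9,
}

def largest_fitting_quant(budget_gb: float) -> str | None:
    """Return the biggest quant whose file fits the budget, or None if nothing fits."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fitting_quant(5.0))  # -> Q4_K_M
print(largest_fitting_quant(2.5))  # -> None
```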
mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF
mradermacher
2024-05-06T05:48:10Z
32
0
transformers
[ "transformers", "gguf", "multilingual", "en", "base_model:kevinpro/MistralMathOctopus-MAPO-DPO-7B", "base_model:quantized:kevinpro/MistralMathOctopus-MAPO-DPO-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-28T12:41:37Z
--- base_model: kevinpro/MistralMathOctopus-MAPO-DPO-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - multilingual --- ## About static quants of https://huggingface.co/kevinpro/MistralMathOctopus-MAPO-DPO-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MistralMathOctopus-MAPO-DPO-7B-GGUF/resolve/main/MistralMathOctopus-MAPO-DPO-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is 
a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/FrankenLimmy-10B-passthrough-GGUF
mradermacher
2024-05-06T05:48:01Z
8
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "liminerity/M7-7b", "en", "base_model:allknowingroger/FrankenLimmy-10B-passthrough", "base_model:quantized:allknowingroger/FrankenLimmy-10B-passthrough", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-28T14:11:35Z
--- base_model: allknowingroger/FrankenLimmy-10B-passthrough language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - liminerity/M7-7b - liminerity/M7-7b - liminerity/M7-7b - liminerity/M7-7b - liminerity/M7-7b --- ## About static quants of https://huggingface.co/allknowingroger/FrankenLimmy-10B-passthrough <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q2_K.gguf) | Q2_K | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.IQ3_XS.gguf) | IQ3_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q3_K_S.gguf) | Q3_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.IQ3_M.gguf) | IQ3_M | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q3_K_L.gguf) | Q3_K_L | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.IQ4_XS.gguf) | IQ4_XS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q4_0.gguf) | Q4_0 | 6.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.IQ4_NL.gguf) | IQ4_NL | 6.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q5_K_S.gguf) | Q5_K_S | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q5_K_M.gguf) | Q5_K_M | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q6_K.gguf) | Q6_K | 9.1 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/FrankenLimmy-10B-passthrough-GGUF/resolve/main/FrankenLimmy-10B-passthrough.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Humanised-LLMv3-GGUF
mradermacher
2024-05-06T05:47:55Z
32
0
transformers
[ "transformers", "gguf", "en", "base_model:Oneeb/Humanised-LLMv3", "base_model:quantized:Oneeb/Humanised-LLMv3", "endpoints_compatible", "region:us" ]
null
2024-03-28T16:03:10Z
--- base_model: Oneeb/Humanised-LLMv3 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/Oneeb/Humanised-LLMv3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q3_K_S.gguf) | Q3_K_S | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ4_XS.gguf) | IQ4_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q4_0.gguf) | Q4_0 | 4.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.IQ4_NL.gguf) | IQ4_NL | 4.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q5_K_S.gguf) | Q5_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q5_K_M.gguf) | Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q6_K.gguf) | Q6_K | 5.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Humanised-LLMv3-GGUF/resolve/main/Humanised-LLMv3.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/random-waifus-4x7b-GGUF
mradermacher
2024-05-06T05:47:41Z
40
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "en", "base_model:Datters/random-waifus-4x7b", "base_model:quantized:Datters/random-waifus-4x7b", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-28T17:03:21Z
--- base_model: Datters/random-waifus-4x7b language: - en library_name: transformers license: other quantized_by: mradermacher tags: - merge - mergekit --- ## About static quants of https://huggingface.co/Datters/random-waifus-4x7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q2_K.gguf) | Q2_K | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.IQ3_XS.gguf) | IQ3_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q3_K_S.gguf) | Q3_K_S | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.IQ3_S.gguf) | IQ3_S | 10.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.IQ3_M.gguf) | IQ3_M | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q3_K_M.gguf) | Q3_K_M | 11.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q3_K_L.gguf) | Q3_K_L | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.IQ4_XS.gguf) | IQ4_XS | 13.3 | | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q4_0.gguf) | Q4_0 | 13.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q4_K_S.gguf) | Q4_K_S | 14.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.IQ4_NL.gguf) | IQ4_NL | 14.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q4_K_M.gguf) | Q4_K_M | 14.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q5_K_S.gguf) | Q5_K_S | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q5_K_M.gguf) | Q5_K_M | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q6_K.gguf) | Q6_K | 20.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/random-waifus-4x7b-GGUF/resolve/main/random-waifus-4x7b.Q8_0.gguf) | Q8_0 | 25.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might 
have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF
mradermacher
2024-05-06T05:46:40Z
62
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "argilla/CapybaraHermes-2.5-Mistral-7B", "kaist-ai/mistral-orpo-capybara-7k", "en", "base_model:Yuma42/KangalKhan-Alpha-Sapphiroid-7B", "base_model:quantized:Yuma42/KangalKhan-Alpha-Sapphiroid-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-28T18:26:26Z
--- base_model: Yuma42/KangalKhan-Alpha-Sapphiroid-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - argilla/CapybaraHermes-2.5-Mistral-7B - kaist-ai/mistral-orpo-capybara-7k --- ## About static quants of https://huggingface.co/Yuma42/KangalKhan-Alpha-Sapphiroid-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF
mradermacher
2024-05-06T05:46:37Z
43
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser", "NousResearch/Hermes-2-Pro-Mistral-7B", "en", "base_model:00000-X/Dolphin-2.6-FC_Hermes-2-Pro", "base_model:quantized:00000-X/Dolphin-2.6-FC_Hermes-2-Pro", "endpoints_compatible", "region:us" ]
null
2024-03-28T18:45:16Z
--- base_model: 00000-X/Dolphin-2.6-FC_Hermes-2-Pro language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser - NousResearch/Hermes-2-Pro-Mistral-7B --- ## About static quants of https://huggingface.co/00000-X/Dolphin-2.6-FC_Hermes-2-Pro <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Dolphin-2.6-FC_Hermes-2-Pro-GGUF/resolve/main/Dolphin-2.6-FC_Hermes-2-Pro.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a 
handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
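This merge is tagged as conversational; a minimal chat sketch with `llama-cpp-python`, assuming one of the quants above has already been downloaded locally (the path below is a placeholder):

```python
from llama_cpp import Llama

# Placeholder local path to a quant downloaded from the table above.
llm = Llama(model_path="Dolphin-2.6-FC_Hermes-2-Pro.Q4_K_M.gguf", n_ctx=4096)

# Use the chat-completion API rather than a raw prompt string.
reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarise what a GGUF quant is in two sentences."},
    ],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```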
mradermacher/Kyllene-57B-v1.0-i1-GGUF
mradermacher
2024-05-06T05:45:56Z
26
0
transformers
[ "transformers", "gguf", "merge", "en", "base_model:TeeZee/Kyllene-57B-v1.0", "base_model:quantized:TeeZee/Kyllene-57B-v1.0", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-28T20:23:58Z
--- base_model: TeeZee/Kyllene-57B-v1.0 language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher tags: - merge --- ## About weighted/imatrix quants of https://huggingface.co/TeeZee/Kyllene-57B-v1.0 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Kyllene-57B-v1.0-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 12.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 14.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 15.9 | | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 18.5 | | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 20.0 | | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 21.7 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 22.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 24.0 | | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 25.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 25.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 26.2 | | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 28.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 30.5 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 32.8 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 32.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 32.9 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 34.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 39.7 | | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 40.7 | | | [GGUF](https://huggingface.co/mradermacher/Kyllene-57B-v1.0-i1-GGUF/resolve/main/Kyllene-57B-v1.0.i1-Q6_K.gguf) | i1-Q6_K | 47.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
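The Usage section mentions concatenating multi-part files, which becomes relevant for large quants such as the 47 GB Q6_K above. A minimal sketch — the part file names are illustrative, so check the repository's actual file listing:

```python
import shutil

# Illustrative part names; consult the repository file list for the real ones.
parts = [
    "Kyllene-57B-v1.0.i1-Q6_K.gguf.part1of2",
    "Kyllene-57B-v1.0.i1-Q6_K.gguf.part2of2",
]

# Simple byte-for-byte concatenation into a single usable GGUF file.
with open("Kyllene-57B-v1.0.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```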
mradermacher/LimmyAutomerge-7B-slerp-GGUF
mradermacher
2024-05-06T05:45:20Z
12
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "automerger/MeliodasNeuralsirkrishna-7B", "liminerity/M7-7b", "en", "base_model:allknowingroger/LimmyAutomerge-7B-slerp", "base_model:quantized:allknowingroger/LimmyAutomerge-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-28T22:22:18Z
--- base_model: allknowingroger/LimmyAutomerge-7B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - automerger/MeliodasNeuralsirkrishna-7B - liminerity/M7-7b --- ## About static quants of https://huggingface.co/allknowingroger/LimmyAutomerge-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LimmyAutomerge-7B-slerp-GGUF/resolve/main/LimmyAutomerge-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
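If you just want to try one of these quants locally, one common route is `llama-cpp-python`. A minimal sketch, not taken from the card, assuming the package is installed and the Q4_K_M file from the table above has already been downloaded (the filename below matches that row):

```python
# Sketch: run a downloaded GGUF quant with llama-cpp-python (one of several GGUF runtimes).
from llama_cpp import Llama

llm = Llama(
    model_path="LimmyAutomerge-7B-slerp.Q4_K_M.gguf",  # local path to the downloaded quant
    n_ctx=4096,       # context length; lower it if you run out of memory
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)
result = llm("Q: What does a SLERP merge do? A:", max_tokens=128)
print(result["choices"][0]["text"])
```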
mradermacher/messiah-7b-v1.0-GGUF
mradermacher
2024-05-06T05:44:58Z
6
0
transformers
[ "transformers", "gguf", "en", "base_model:meseca/messiah-7b-v1.0", "base_model:quantized:meseca/messiah-7b-v1.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-28T22:58:20Z
--- base_model: meseca/messiah-7b-v1.0 language: - en library_name: transformers quantized_by: mradermacher --- ## About static quants of https://huggingface.co/meseca/messiah-7b-v1.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/messiah-7b-v1.0-GGUF/resolve/main/messiah-7b-v1.0.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
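The Usage section above also mentions concatenating multi-part files. For repos whose larger quants are shipped as plain byte splits (names like `*.gguf.part1of2`), joining them is simple binary concatenation, as in the sketch below; the filenames shown are hypothetical, and split files produced by `gguf-split` are a different format that llama.cpp loads without any merging.

```python
# Sketch: join byte-split GGUF parts (hypothetical filenames; only needed for multi-part uploads).
import glob
import shutil

parts = sorted(glob.glob("some-model.Q8_0.gguf.part*"))  # part1of2, part2of2, ...
with open("some-model.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)  # append each part in order
```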
mradermacher/megatron_v3_2x7B-GGUF
mradermacher
2024-05-06T05:44:31Z
196
0
transformers
[ "transformers", "gguf", "moe", "merge", "en", "tr", "base_model:Eurdem/megatron_v3_2x7B", "base_model:quantized:Eurdem/megatron_v3_2x7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-28T23:27:48Z
--- base_model: Eurdem/megatron_v3_2x7B language: - en - tr library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - merge --- ## About static quants of https://huggingface.co/Eurdem/megatron_v3_2x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.IQ3_M.gguf) | IQ3_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q4_0.gguf) | Q4_0 | 7.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.IQ4_NL.gguf) | IQ4_NL | 7.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/megatron_v3_2x7B-GGUF/resolve/main/megatron_v3_2x7B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
maanavsaggu/llama3-sports-knowledge-graph-finetuned-model
maanavsaggu
2024-05-06T05:44:25Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-06T04:53:15Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** maanavsaggu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
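Since the card only describes the training setup, here is a hedged sketch of reloading the fine-tune for inference with Unsloth's `FastLanguageModel`, mirroring the 4-bit base it was trained from. The `max_seq_length` value and the prompt are assumptions, not stated in the card, and the sketch assumes the repo holds weights Unsloth can load directly.

```python
# Sketch: reload the fine-tune with Unsloth for inference (max_seq_length and prompt are assumptions).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="maanavsaggu/llama3-sports-knowledge-graph-finetuned-model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

inputs = tokenizer("Extract the entities: Messi joined Inter Miami in 2023.", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```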
mradermacher/Cyntia22-GGUF
mradermacher
2024-05-06T05:44:18Z
82
0
transformers
[ "transformers", "gguf", "en", "base_model:Cyntia22/Cyntia22", "base_model:quantized:Cyntia22/Cyntia22", "endpoints_compatible", "region:us" ]
null
2024-03-29T00:26:03Z
--- base_model: Cyntia22/Cyntia22 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/Cyntia22/Cyntia22 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Cyntia22-GGUF/resolve/main/Cyntia22.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Etudiant-GGUF
mradermacher
2024-05-06T05:44:15Z
53
0
transformers
[ "transformers", "gguf", "en", "base_model:kemtho/Etudiant", "base_model:quantized:kemtho/Etudiant", "endpoints_compatible", "region:us" ]
null
2024-03-29T00:59:37Z
--- base_model: kemtho/Etudiant language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/kemtho/Etudiant <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/NexusRaven-15B-pass-GGUF
mradermacher
2024-05-06T05:43:08Z
11
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Nexusflow/NexusRaven-V2-13B", "en", "base_model:allknowingroger/NexusRaven-15B-pass", "base_model:quantized:allknowingroger/NexusRaven-15B-pass", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T02:13:08Z
--- base_model: allknowingroger/NexusRaven-15B-pass language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Nexusflow/NexusRaven-V2-13B --- ## About static quants of https://huggingface.co/allknowingroger/NexusRaven-15B-pass <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q2_K.gguf) | Q2_K | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.IQ3_XS.gguf) | IQ3_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.IQ3_S.gguf) | IQ3_S | 7.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q3_K_S.gguf) | Q3_K_S | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.IQ3_M.gguf) | IQ3_M | 7.4 | | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q3_K_M.gguf) | Q3_K_M | 7.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q3_K_L.gguf) | Q3_K_L | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.IQ4_XS.gguf) | IQ4_XS | 8.7 | | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q4_0.gguf) | Q4_0 | 9.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.IQ4_NL.gguf) | IQ4_NL | 9.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q4_K_S.gguf) | Q4_K_S | 9.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q4_K_M.gguf) | Q4_K_M | 9.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q5_K_S.gguf) | Q5_K_S | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q5_K_M.gguf) | Q5_K_M | 11.3 | | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q6_K.gguf) | Q6_K | 13.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NexusRaven-15B-pass-GGUF/resolve/main/NexusRaven-15B-pass.Q8_0.gguf) | Q8_0 | 16.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
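Before committing to a download, it can help to confirm which quant files the repo actually carries; a small sketch using `huggingface_hub` (not part of the card, with the repo id taken from this entry):

```python
# Sketch: list the GGUF files available in the repo before choosing a quant.
from huggingface_hub import list_repo_files

gguf_files = [f for f in list_repo_files("mradermacher/NexusRaven-15B-pass-GGUF") if f.endswith(".gguf")]
for name in sorted(gguf_files):
    print(name)
```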
mradermacher/Asclepius-DPO-Mistral-7B-GGUF
mradermacher
2024-05-06T05:43:04Z
26
0
transformers
[ "transformers", "gguf", "en", "base_model:yerkekz/Asclepius-DPO-Mistral-7B", "base_model:quantized:yerkekz/Asclepius-DPO-Mistral-7B", "endpoints_compatible", "region:us" ]
null
2024-03-29T02:46:09Z
--- base_model: yerkekz/Asclepius-DPO-Mistral-7B language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/yerkekz/Asclepius-DPO-Mistral-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Asclepius-DPO-Mistral-7B-GGUF/resolve/main/Asclepius-DPO-Mistral-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/StoneDesign-GGUF
mradermacher
2024-05-06T05:42:11Z
91
0
transformers
[ "transformers", "gguf", "en", "base_model:Mengue/StoneDesign", "base_model:quantized:Mengue/StoneDesign", "endpoints_compatible", "region:us" ]
null
2024-03-29T03:52:15Z
--- base_model: Mengue/StoneDesign language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/Mengue/StoneDesign <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/StoneDesign-GGUF/resolve/main/StoneDesign.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
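As a rough rule of thumb, and an approximation rather than anything the card states, a quant needs at least its file size in memory for the weights plus some headroom for the context; the sketch below compares the Size/GB column against what the machine currently has free, with both constants being assumptions.

```python
# Sketch: rough check of whether a quant from the table plausibly fits in free RAM (heuristic only).
import psutil

quant_size_gb = 4.6  # Size/GB column for Q4_K_M in the table above
headroom_gb = 1.5    # assumed extra for KV cache and runtime overhead
free_gb = psutil.virtual_memory().available / 1024**3

if free_gb >= quant_size_gb + headroom_gb:
    print(f"Q4_K_M (~{quant_size_gb} GB) should fit; {free_gb:.1f} GB currently free.")
else:
    print(f"Only {free_gb:.1f} GB free; consider a smaller quant from the table.")
```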
Wespeaker/wespeaker-voxceleb-campplus-LM
Wespeaker
2024-05-06T05:42:10Z
5
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2024-05-06T05:06:08Z
---
license: apache-2.0
---
mradermacher/AI2AIv2-candybot-7b-v3-GGUF
mradermacher
2024-05-06T05:42:00Z
62
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T04:41:55Z
--- base_model: EverAI-AI/AI2AIv2-candybot-7b-v3 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl --- ## About static quants of https://huggingface.co/EverAI-AI/AI2AIv2-candybot-7b-v3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/AI2AIv2-candybot-7b-v3-GGUF/resolve/main/AI2AIv2-candybot-7b-v3.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Vistral-7B-ties-GGUF
mradermacher
2024-05-06T05:41:55Z
5
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Viet-Mistral/Vistral-7B-Chat", "yam-peleg/Experiment26-7B", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T04:51:25Z
--- base_model: pphuc25/Vistral-7B-ties language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Viet-Mistral/Vistral-7B-Chat - yam-peleg/Experiment26-7B --- ## About static quants of https://huggingface.co/pphuc25/Vistral-7B-ties <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q2_K.gguf) | Q2_K | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.IQ3_XS.gguf) | IQ3_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q3_K_S.gguf) | Q3_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.IQ3_S.gguf) | IQ3_S | 2.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.IQ3_M.gguf) | IQ3_M | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q3_K_M.gguf) | Q3_K_M | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q3_K_L.gguf) | Q3_K_L | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.IQ4_XS.gguf) | IQ4_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q4_0.gguf) | Q4_0 | 3.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q4_K_S.gguf) | Q4_K_S | 3.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.IQ4_NL.gguf) | IQ4_NL | 3.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q4_K_M.gguf) | Q4_K_M | 3.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q5_K_S.gguf) | Q5_K_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q5_K_M.gguf) | Q5_K_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q6_K.gguf) | Q6_K | 4.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Vistral-7B-ties-GGUF/resolve/main/Vistral-7B-ties.Q8_0.gguf) | Q8_0 | 6.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/StarfishRP-GGUF
mradermacher
2024-05-06T05:41:52Z
156
1
transformers
[ "transformers", "gguf", "rp", "roleplay", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T04:53:59Z
--- base_model: Fredithefish/StarfishRP language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - rp - roleplay --- ## About static quants of https://huggingface.co/Fredithefish/StarfishRP <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/StarfishRP-GGUF/resolve/main/StarfishRP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
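If you would rather grab a quant by filename pattern than by exact name, `snapshot_download` with `allow_patterns` works too; a sketch, assuming `huggingface_hub` is installed and using the repo id from this card:

```python
# Sketch: download only the Q4_K_M quant from the repo by filename pattern.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mradermacher/StarfishRP-GGUF",
    allow_patterns=["*Q4_K_M.gguf"],  # fetch just this quant instead of the whole repo
)
print(local_dir)  # snapshot folder containing the matched file(s)
```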
mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF
mradermacher
2024-05-06T05:41:41Z
37
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser", "crestf411/daybreak-kunoichi-2dpo-7b", "en", "endpoints_compatible", "region:us" ]
null
2024-03-29T05:17:21Z
--- base_model: ThijsL202/dolphin-mistral-daybreak-kunoichi-dpo-7B language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser - crestf411/daybreak-kunoichi-2dpo-7b --- ## About static quants of https://huggingface.co/ThijsL202/dolphin-mistral-daybreak-kunoichi-dpo-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | 
[GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-mistral-daybreak-kunoichi-dpo-7B-GGUF/resolve/main/dolphin-mistral-daybreak-kunoichi-dpo-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/electric-mist-7b-GGUF
mradermacher
2024-05-06T05:41:38Z
129
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "en", "dataset:maldv/cyberpunk", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Weyaxi/sci-datasets", "dataset:grimulkan/theory-of-mind", "dataset:ResplendentAI/Synthetic_Soul_1k", "dataset:GraphWiz/GraphInstruct-RFT-72K", "base_model:maldv/electric-mist-7b", "base_model:quantized:maldv/electric-mist-7b", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T05:53:25Z
--- base_model: maldv/electric-mist-7b datasets: - maldv/cyberpunk - microsoft/orca-math-word-problems-200k - Weyaxi/sci-datasets - grimulkan/theory-of-mind - ResplendentAI/Synthetic_Soul_1k - GraphWiz/GraphInstruct-RFT-72K language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral --- ## About static quants of https://huggingface.co/maldv/electric-mist-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/electric-mist-7b-GGUF/resolve/main/electric-mist-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Gamma-Alpha-7B-GGUF
mradermacher
2024-05-06T05:41:13Z
45
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Inv/Gamma-Alpha-7B", "base_model:quantized:Inv/Gamma-Alpha-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T05:56:00Z
--- base_model: Inv/Gamma-Alpha-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About static quants of https://huggingface.co/Inv/Gamma-Alpha-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Gamma-Alpha-7B-GGUF/resolve/main/Gamma-Alpha-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
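The Usage section of the card above only points at TheBloke's READMEs for how to obtain and use GGUF files. As a minimal, hedged sketch (not part of the original card), one way to fetch a single quant listed in the table is the `huggingface_hub` Python package; the repo id and filename below are copied from the Q4_K_M row, while the use of `hf_hub_download` itself is an assumption about tooling, not something the card prescribes.

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant listed in the table above (~4.6 GB).
# repo_id and filename are taken verbatim from the card's links.
path = hf_hub_download(
    repo_id="mradermacher/Gamma-Alpha-7B-GGUF",
    filename="Gamma-Alpha-7B.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```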
mradermacher/Ramakrishna-7b-v3-GGUF
mradermacher
2024-05-06T05:41:09Z
63
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "automerger/YamShadow-7B", "Kukedlc/Neural4gsm8k", "Kukedlc/NeuralSirKrishna-7b", "mlabonne/NeuBeagle-7B", "Kukedlc/Ramakrishna-7b", "Kukedlc/NeuralGanesha-7b", "en", "base_model:Kukedlc/Ramakrishna-7b-v3", "base_model:quantized:Kukedlc/Ramakrishna-7b-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T06:14:14Z
--- base_model: Kukedlc/Ramakrishna-7b-v3 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - automerger/YamShadow-7B - Kukedlc/Neural4gsm8k - Kukedlc/NeuralSirKrishna-7b - mlabonne/NeuBeagle-7B - Kukedlc/Ramakrishna-7b - Kukedlc/NeuralGanesha-7b --- ## About static quants of https://huggingface.co/Kukedlc/Ramakrishna-7b-v3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v3-GGUF/resolve/main/Ramakrishna-7b-v3.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/megatron_v4_4x7B-GGUF
mradermacher
2024-05-06T05:40:54Z
100
0
transformers
[ "transformers", "gguf", "moe", "merge", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T06:45:36Z
--- base_model: Eurdem/megatron_v4_4x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - merge --- ## About static quants of https://huggingface.co/Eurdem/megatron_v4_4x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q2_K.gguf) | Q2_K | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.IQ3_XS.gguf) | IQ3_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q3_K_S.gguf) | Q3_K_S | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.IQ3_S.gguf) | IQ3_S | 10.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.IQ3_M.gguf) | IQ3_M | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q3_K_M.gguf) | Q3_K_M | 11.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q3_K_L.gguf) | Q3_K_L | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.IQ4_XS.gguf) | IQ4_XS | 13.3 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q4_0.gguf) | Q4_0 | 13.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q4_K_S.gguf) | Q4_K_S | 14.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.IQ4_NL.gguf) | IQ4_NL | 14.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q4_K_M.gguf) | Q4_K_M | 14.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q5_K_S.gguf) | Q5_K_S | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q5_K_M.gguf) | Q5_K_M | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q6_K.gguf) | Q6_K | 20.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/megatron_v4_4x7B-GGUF/resolve/main/megatron_v4_4x7B.Q8_0.gguf) | Q8_0 | 25.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Konstanta-V4-Alpha-7B-GGUF
mradermacher
2024-05-06T05:40:45Z
94
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "senseable/WestLake-7B-v2", "KatyTheCutie/LemonadeRP-4.5.3", "roleplay", "rp", "en", "base_model:Inv/Konstanta-V4-Alpha-7B", "base_model:quantized:Inv/Konstanta-V4-Alpha-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T06:58:01Z
--- base_model: Inv/Konstanta-V4-Alpha-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge - senseable/WestLake-7B-v2 - KatyTheCutie/LemonadeRP-4.5.3 - roleplay - rp --- ## About static quants of https://huggingface.co/Inv/Konstanta-V4-Alpha-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Konstanta-V4-Alpha-7B-GGUF/resolve/main/Konstanta-V4-Alpha-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Exponenta-Alpha-7B-GGUF
mradermacher
2024-05-06T05:40:21Z
38
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Inv/Exponenta-Alpha-7B", "base_model:quantized:Inv/Exponenta-Alpha-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T07:24:22Z
--- base_model: Inv/Exponenta-Alpha-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About static quants of https://huggingface.co/Inv/Exponenta-Alpha-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Exponenta-Alpha-7B-GGUF/resolve/main/Exponenta-Alpha-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you 
want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF
mradermacher
2024-05-06T05:40:12Z
8
0
transformers
[ "transformers", "gguf", "cy", "en", "dataset:yahma/alpaca-cleaned", "dataset:allenai/MADLAD-400", "base_model:BangorAI/Mistral-7B-Cymraeg-Welsh-v2", "base_model:quantized:BangorAI/Mistral-7B-Cymraeg-Welsh-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T07:44:55Z
--- base_model: BangorAI/Mistral-7B-Cymraeg-Welsh-v2 datasets: - yahma/alpaca-cleaned - allenai/MADLAD-400 language: - cy - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/BangorAI/Mistral-7B-Cymraeg-Welsh-v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Cymraeg-Welsh-v2-GGUF/resolve/main/Mistral-7B-Cymraeg-Welsh-v2.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant 
types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
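The card above does not name a runtime for its GGUF files; as one possible (assumed) option, a downloaded quant can be loaded with the `llama-cpp-python` package. Treat this as a hedged sketch under that assumption, with the filename taken from the Q4_K_M row of the table, not as the documented way to run the model.

```python
from llama_cpp import Llama  # assumed runtime, not named in the card

# Load a quant downloaded from the table above and run a short completion.
llm = Llama(
    model_path="Mistral-7B-Cymraeg-Welsh-v2.Q4_K_M.gguf",  # file from the Q4_K_M row
    n_ctx=4096,  # context length; raise or lower to fit available RAM
)
out = llm("Translate to Welsh: Good morning.", max_tokens=64)
print(out["choices"][0]["text"])
```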
mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF
mradermacher
2024-05-06T05:38:33Z
10
0
transformers
[ "transformers", "gguf", "Mistral", "ko", "base_model:AIdenU/Mistral-7B-v0.2-ko-Y24_v1.0", "base_model:quantized:AIdenU/Mistral-7B-v0.2-ko-Y24_v1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T08:55:01Z
--- base_model: AIdenU/Mistral-7B-v0.2-ko-Y24_v1.0 language: - ko library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - Mistral --- ## About static quants of https://huggingface.co/AIdenU/Mistral-7B-v0.2-ko-Y24_v1.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.2-ko-Y24_v1.0-GGUF/resolve/main/Mistral-7B-v0.2-ko-Y24_v1.0.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/openchat-marunashop-v3-1.0-GGUF
mradermacher
2024-05-06T05:38:30Z
10
0
transformers
[ "transformers", "gguf", "en", "base_model:kimmypracha/openchat-marunashop-v3-1.0", "base_model:quantized:kimmypracha/openchat-marunashop-v3-1.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T08:55:30Z
--- base_model: kimmypracha/openchat-marunashop-v3-1.0 language: - en library_name: transformers quantized_by: mradermacher --- ## About static quants of https://huggingface.co/kimmypracha/openchat-marunashop-v3-1.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/openchat-marunashop-v3-1.0-GGUF/resolve/main/openchat-marunashop-v3-1.0.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here 
are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Bonaparta-7B-GGUF
mradermacher
2024-05-06T05:38:26Z
128
0
transformers
[ "transformers", "gguf", "en", "base_model:grx96/Bonaparta-7B", "base_model:quantized:grx96/Bonaparta-7B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T08:59:25Z
--- base_model: grx96/Bonaparta-7B language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About static quants of https://huggingface.co/grx96/Bonaparta-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Bonaparta-7B-GGUF/resolve/main/Bonaparta-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Miscela-7b-slerp-GGUF
mradermacher
2024-05-06T05:37:51Z
189
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:grx96/Miscela-7b-slerp", "base_model:quantized:grx96/Miscela-7b-slerp", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T09:43:41Z
--- base_model: grx96/Miscela-7b-slerp language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About static quants of https://huggingface.co/grx96/Miscela-7b-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Miscela-7b-slerp-GGUF/resolve/main/Miscela-7b-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/kor_resume-Orion-14B-GGUF
mradermacher
2024-05-06T05:37:45Z
16
0
transformers
[ "transformers", "gguf", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T10:01:50Z
--- base_model: nebchi/kor_resume-Orion-14B language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About static quants of https://huggingface.co/nebchi/kor_resume-Orion-14B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q2_K.gguf) | Q2_K | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.IQ3_XS.gguf) | IQ3_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.IQ3_S.gguf) | IQ3_S | 7.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q3_K_S.gguf) | Q3_K_S | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.IQ3_M.gguf) | IQ3_M | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q3_K_M.gguf) | Q3_K_M | 7.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q3_K_L.gguf) | Q3_K_L | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.IQ4_XS.gguf) | IQ4_XS | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q4_0.gguf) | Q4_0 | 8.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.IQ4_NL.gguf) | IQ4_NL | 8.9 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q4_K_S.gguf) | Q4_K_S | 8.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q4_K_M.gguf) | Q4_K_M | 9.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q5_K_S.gguf) | Q5_K_S | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q5_K_M.gguf) | Q5_K_M | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q6_K.gguf) | Q6_K | 12.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/kor_resume-Orion-14B-GGUF/resolve/main/kor_resume-Orion-14B.Q8_0.gguf) | Q8_0 | 15.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers 
to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Hikari-4x7B-GGUF
mradermacher
2024-05-06T05:37:42Z
57
0
transformers
[ "transformers", "gguf", "en", "ja", "base_model:saucam/Hikari-4x7B", "base_model:quantized:saucam/Hikari-4x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T10:10:42Z
--- base_model: saucam/Hikari-4x7B language: - en - ja library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/saucam/Hikari-4x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q2_K.gguf) | Q2_K | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.IQ3_XS.gguf) | IQ3_XS | 10.3 | | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q3_K_S.gguf) | Q3_K_S | 10.8 | | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.IQ3_S.gguf) | IQ3_S | 10.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.IQ3_M.gguf) | IQ3_M | 11.1 | | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q3_K_M.gguf) | Q3_K_M | 12.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q3_K_L.gguf) | Q3_K_L | 13.0 | | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.IQ4_XS.gguf) | IQ4_XS | 13.5 | | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q4_0.gguf) | Q4_0 | 14.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q4_K_S.gguf) | Q4_K_S | 14.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.IQ4_NL.gguf) | IQ4_NL | 14.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q4_K_M.gguf) | Q4_K_M | 15.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q5_K_S.gguf) | Q5_K_S | 17.1 | | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q5_K_M.gguf) | Q5_K_M | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q6_K.gguf) | Q6_K | 20.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hikari-4x7B-GGUF/resolve/main/Hikari-4x7B.Q8_0.gguf) | Q8_0 | 26.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
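The Size/GB column in the table above is the main input when choosing a quant for a given machine. As a hedged illustration (the card itself gives no selection rule), the sketch below copies the Hikari-4x7B sizes from the table and picks the largest quant that fits a memory budget; the 2 GB headroom for context and overhead is an assumption, not a figure from the card.

```python
# Sizes in GB, copied from the Hikari-4x7B quant table above.
SIZES_GB = {
    "Q2_K": 9.2, "IQ3_XS": 10.3, "Q3_K_S": 10.8, "IQ3_S": 10.9,
    "IQ3_M": 11.1, "Q3_K_M": 12.0, "Q3_K_L": 13.0, "IQ4_XS": 13.5,
    "Q4_0": 14.0, "Q4_K_S": 14.2, "IQ4_NL": 14.2, "Q4_K_M": 15.0,
    "Q5_K_S": 17.1, "Q5_K_M": 17.6, "Q6_K": 20.3, "Q8_0": 26.1,
}

def largest_fit(budget_gb: float, headroom_gb: float = 2.0) -> str | None:
    """Return the biggest quant whose file size leaves some headroom."""
    fitting = {k: v for k, v in SIZES_GB.items() if v + headroom_gb <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fit(17))  # -> 'Q4_K_M' (15.0 GB) with the default 2 GB headroom
```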
mradermacher/HermesFlashback-7B.1-GGUF
mradermacher
2024-05-06T05:37:36Z
44
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "mlabonne/NeuralHermes-2.5-Mistral-7B", "timpal0l/Mistral-7B-v0.1-flashback-v2", "en", "base_model:FredrikBL/HermesFlashback-7B.1", "base_model:quantized:FredrikBL/HermesFlashback-7B.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T10:30:02Z
--- base_model: FredrikBL/HermesFlashback-7B.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - mlabonne/NeuralHermes-2.5-Mistral-7B - timpal0l/Mistral-7B-v0.1-flashback-v2 --- ## About static quants of https://huggingface.co/FredrikBL/HermesFlashback-7B.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/shqiponja-59b-v1-GGUF
mradermacher
2024-05-06T05:37:28Z
4
1
transformers
[ "transformers", "gguf", "mergekit", "frankenstein", "merge", "en", "base_model:nisten/shqiponja-59b-v1", "base_model:quantized:nisten/shqiponja-59b-v1", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T12:14:30Z
--- base_model: nisten/shqiponja-59b-v1 language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - mergekit - frankenstein - merge --- ## About static quants of https://huggingface.co/nisten/shqiponja-59b-v1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/shqiponja-59b-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q2_K.gguf) | Q2_K | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.IQ3_XS.gguf) | IQ3_XS | 24.9 | | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q3_K_S.gguf) | Q3_K_S | 26.2 | | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.IQ3_S.gguf) | IQ3_S | 26.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.IQ3_M.gguf) | IQ3_M | 27.2 | | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q3_K_M.gguf) | Q3_K_M | 29.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q3_K_L.gguf) | Q3_K_L | 31.7 | | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.IQ4_XS.gguf) | IQ4_XS | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q4_0.gguf) | Q4_0 | 33.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q4_K_S.gguf) | Q4_K_S | 34.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.IQ4_NL.gguf) | IQ4_NL | 34.3 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q4_K_M.gguf) | Q4_K_M | 36.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q5_K_S.gguf) | Q5_K_S | 41.2 | | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q5_K_M.gguf) | Q5_K_M | 42.3 | | | [GGUF](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q6_K.gguf) | Q6_K | 49.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/shqiponja-59b-v1-GGUF/resolve/main/shqiponja-59b-v1.Q8_0.gguf.part2of2) | Q8_0 | 63.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
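The card above is one of the few in this batch whose table lists a multi-part file (the Q8_0 quant as PART 1 and PART 2), which is exactly the "how to concatenate multi-part files" case its Usage section defers to TheBloke's READMEs for. As a hedged sketch, the parts can simply be byte-concatenated after download; the part filenames below are taken from the table, while the use of `huggingface_hub` and the plain-split assumption are mine.

```python
from huggingface_hub import hf_hub_download

repo = "mradermacher/shqiponja-59b-v1-GGUF"
parts = [
    "shqiponja-59b-v1.Q8_0.gguf.part1of2",
    "shqiponja-59b-v1.Q8_0.gguf.part2of2",
]

# Reassemble the two-part Q8_0 quant into a single GGUF file by
# concatenating the downloaded parts in order, 1 MiB at a time.
with open("shqiponja-59b-v1.Q8_0.gguf", "wb") as out:
    for name in parts:
        with open(hf_hub_download(repo_id=repo, filename=name), "rb") as f:
            while chunk := f.read(1 << 20):
                out.write(chunk)
```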
mradermacher/ThaliaAlpha-GGUF
mradermacher
2024-05-06T05:37:18Z
56
0
transformers
[ "transformers", "gguf", "mlx", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-29T15:54:22Z
--- base_model: N8Programs/ThaliaAlpha language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mlx --- ## About static quants of https://huggingface.co/N8Programs/ThaliaAlpha <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Ramakrishna-7b-v4-GGUF
mradermacher
2024-05-06T05:36:43Z
109
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "automerger/YamShadow-7B", "automerger/OgnoExperiment27-7B", "automerger/PasticheNeuralsirkrishna-7B", "automerger/Experiment26Neuralarjuna-7B", "Kukedlc/NeuralGanesha-7b", "en", "base_model:Kukedlc/Ramakrishna-7b-v4", "base_model:quantized:Kukedlc/Ramakrishna-7b-v4", "endpoints_compatible", "region:us" ]
null
2024-03-29T16:39:59Z
---
base_model: Kukedlc/Ramakrishna-7b-v4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- automerger/YamShadow-7B
- automerger/OgnoExperiment27-7B
- automerger/PasticheNeuralsirkrishna-7B
- automerger/Experiment26Neuralarjuna-7B
- Kukedlc/NeuralGanesha-7b
---
## About

static quants of https://huggingface.co/Kukedlc/Ramakrishna-7b-v4

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Ramakrishna-7b-v4-GGUF/resolve/main/Ramakrishna-7b-v4.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
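The Usage section of the card above also mentions concatenating multi-part files. The quants listed for this 7B model are single files, but for larger repos that ship split GGUFs, a sketch like the following (standard library only) would reassemble the parts; the part-file names below are hypothetical and must be replaced with the actual names shown in the target repo.

```python
import shutil

# Hypothetical split names for illustration; real repos list the exact part
# names. Parts must be concatenated in order into a single .gguf file.
parts = ["model.Q8_0.gguf.part1of2", "model.Q8_0.gguf.part2of2"]

with open("model.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)  # stream-copy each part in order
```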