eaddario committed
Commit 9c5b75f · verified · 1 Parent(s): dd9bc2f

Update README.md

Files changed (1):
  1. README.md +107 -45

README.md CHANGED
@@ -11,25 +11,28 @@ pipeline_tag: text-generation
11
  tags:
12
  - gguf
13
  - quant
 
14
  ---
15
 
16
- # GGUF and "i-matrix" quantized versions of cognitivecomputations/Dolphin3.0-Mistral-24B
17
 
18
- Using [LLaMA C++](https://github.com/ggerganov/llama.cpp) release [b4762](https://github.com/ggerganov/llama.cpp/releases/tag/b4762) for quantization.
19
 
20
  Original model: [cognitivecomputations/Dolphin3.0-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Mistral-24B)
21
 
22
- From the model creator:
23
 
24
- > Dolphin 3.0 is the next generation of the Dolphin series of instruct-tuned models. Designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases.
25
  >
26
- > Dolphin aims to be an **uncensored** general purpose reasoning instruct model, similar to the models behind ChatGPT, Claude, Gemini. But these models present problems for businesses seeking to include AI in their products.
27
  >
28
- > - They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
29
- > - They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
30
- > - They maintain control of the alignment, and in particular the alignment is one-size-fits all, not tailored to the application.
31
- > - They can see all your queries and they can potentially use that data in ways you wouldn't want. Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.
32
- > - Dolphin belongs to YOU, it is your tool, an extension of your will. Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.
33
 
34
  From Eric Hartford's, the creator of the Dolphin model series, [Uncensored Models](https://erichartford.com/uncensored-models):
35
 
@@ -37,56 +40,115 @@ From Eric Hartford's, the creator of the Dolphin model series, [Uncensored Model
37
  >
38
  > The reason these models are aligned is that they are trained with data that was generated by ChatGPT, which itself is aligned by an alignment team at OpenAI. As it is a black box, we don't know all the reasons for the decisions that were made, but we can observe it generally is aligned with American popular culture, and to obey American law, and with a liberal and progressive political bias.
39
 
 
40
 
41
- All quantized versions were generated using an appropriate imatrix created from datasets available at [eaddario/imatrix-calibration](https://huggingface.co/datasets/eaddario/imatrix-calibration).
42
 
43
- At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that scores the relative importance of different features or parameters in a machine learning model. It essentially quantifies the "impact" each feature has on a specific outcome, prediction, or relationship being modeled.
44
 
45
- The process to produce the quantized [GGUF](https://huggingface.co/docs/hub/en/gguf) models is roughly as follows:
46
 
47
- 1. Convert the the original model's safetensors into GGUF F16*
48
- 2. Estimate the Perplexity score for the F16 model (base) using [wikitext-2-raw-v1](https://huggingface.co/datasets/Salesforce/wikitext/tree/main/wikitext-2-raw-v1), and record the [logits](https://huggingface.co/eaddario/Dolphin3.0-Mistral-24B-GGUF/tree/main/logits)
49
- 3. Generate the [imatrix](https://huggingface.co/eaddario/Dolphin3.0-Mistral-24B-GGUF/tree/main/imatrix) for each calibration dataset
50
- 4. Create quantized versions of the base model using each imatrix per quant type
51
- 5. Calculate the Perplexity and KL Divergence scores for each quantized model [(scores)](https://huggingface.co/eaddario/Dolphin3.0-Mistral-24B-GGUF/tree/main/scores)
52
- 6. For each quant type, keep the version with the best (usually the lowest) scores
53
 
54
- *[BF16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format) would be preferred, but Apple's GPUs don't support it yet, and therefore any operations are executed in the CPU, making it unacceptably slow. This is expected to change in the near term but until then, if you are using Apple kit avoid using any models tagged BF16
55
 
56
- # Motivation
57
 
58
- An area of ongoing personal research is to optimize the inference performance of LLMs when deployed in resource-constrained environments like, for example, commodity hardware, personal desktops/laptops, edge devices, etc.
59
 
60
- The process of [quantization](https://huggingface.co/docs/optimum/en/concept_guides/quantization) reduces the precision of the model's weights, leading to significant reductions in model size, memory needs and computational requirements (a good thing), but this however comes at the expense of a loss in the model's capabilities and accuracy (a bad thing!).
61
 
62
- By producing imatrix optimized quantized models, we can maintain inference efficiency whilst reducing memory size and CPU/GPU processing requirements. This optimization is crucial for deploying LLMs on devices with limited hardware capabilities, such as mobile phones or edge devices, without sacrificing significant accuracy.
63
 
64
  # Models
65
 
66
- | Filename | Quant type | Size | Perplexity (μ) | ln(PPL(Q)/PPL(base)) | KL Divergence (μ) | Description |
67
- |-----------------------------------------------------------------------|------------|--------|---------------------|----------------------|--------------------|--------------------------------------------------------------------------------|
68
- | [Dolphin3.0-Mistral-24B-F16](./Dolphin3.0-Mistral-24B-F16.gguf) | F16 | 47.20G | 7.669212 ±0.052592 | N/A | N/A | 16-bit standard IEEE 754 half-precision floating-point number |
69
- | [Dolphin3.0-Mistral-24B-Q8_0](./Dolphin3.0-Mistral-24B-Q8_0.gguf) | Q8_0 | 25.10G | 7.668204 ±0.052714 | 99.94% | 0.001478 ±0.000032 | Extremely high quality, generally unneeded but max available quant |
70
- | [Dolphin3.0-Mistral-24B-Q6_K](./Dolphin3.0-Mistral-24B-Q6_K.gguf) | Q6_K | 19.30G | 7.697080 ±0.053150 | 99.90% | 0.003182 ±0.000018 | Very high quality, near perfect, *recommended* |
71
- | [Dolphin3.0-Mistral-24B-Q5_K_M](./Dolphin3.0-Mistral-24B-Q5_K_M.gguf) | Q5_K_M | 16.80G | 7.713033 ±0.053233 | 99.84% | 0.006054 ±0.000033 | High quality |
72
- | [Dolphin3.0-Mistral-24B-Q5_K_S](./Dolphin3.0-Mistral-24B-Q5_K_S.gguf) | Q5_K_S | 16.30G | 7.731911 ±0.053337 | 99.83% | 0.006765 ±0.000058 | High quality, *recommended* |
73
- | [Dolphin3.0-Mistral-24B-IQ4_NL](./Dolphin3.0-Mistral-24B-IQ4_NL.gguf) | IQ4_NL | 13.50G | 7.840359 ±0.054591 | 99.59% | 0.018718 ±0.000116 | Good quality, new method (super-blocks with 256 weights), *recommended* |
74
- | [Dolphin3.0-Mistral-24B-Q4_K_M](./Dolphin3.0-Mistral-24B-Q4_K_M.gguf) | Q4_K_M | 14.30G | 7.815683 ±0.054370 | 99.64% | 0.015912 ±0.000100 | Good quality, default size for must use cases, *recommended* |
75
- | [Dolphin3.0-Mistral-24B-Q4_K_S](./Dolphin3.0-Mistral-24B-Q4_K_S.gguf) | Q4_K_S | 13.50G | 7.853283 ±0.054726 | 99.59% | 0.018656 ±0.000110 | Good quality, best choice in the Q4 series if RAM is scarce, *recommended* |
76
- | [Dolphin3.0-Mistral-24B-IQ3_M](./Dolphin3.0-Mistral-24B-IQ3_M.gguf) | IQ3_M | 10.70G | 8.286801 ±0.058783 | 98.72% | 0.061737 ±0.000316 | Medium-low quality, new method with decent performance comparable to Q3_K_M |
77
- | [Dolphin3.0-Mistral-24B-IQ3_S](./Dolphin3.0-Mistral-24B-IQ3_S.gguf) | IQ3_S | 10.40G | 8.376969 ±0.060007 | 98.61% | 0.068549 ±0.000337 | Lower quality, new method with decent performance, better than Q3_K_S |
78
- | [Dolphin3.0-Mistral-24B-Q3_K_L](./Dolphin3.0-Mistral-24B-Q3_K_L.gguf) | Q3_K_L | 12.40G | 8.110625 ±0.057574 | 99.13% | 0.040448 ±0.000234 | Lower quality but usable, good for low RAM availability, *recommended* |
79
- | [Dolphin3.0-Mistral-24B-Q3_K_M](./Dolphin3.0-Mistral-24B-Q3_K_M.gguf) | Q3_K_M | 11.50G | 8.165871 ±0.058188 | 98.99% | 0.047858 ±0.000270 | Medium-low quality |
80
- | [Dolphin3.0-Mistral-24B-Q3_K_S](./Dolphin3.0-Mistral-24B-Q3_K_S.gguf) | Q3_K_S | 10.40G | 8.471756 ±0.060877 | 97.94% | 0.095982 ±0.000509 | Lower quality but may be usable in certain cases |
81
-
82
- I find that quantizations below Q3/IQ3 are not fit for my purposes and therefore do not usually generate them, but happy to provide other quants on request.
83
 
84
  # Metrics used
 
85
 
86
- **[Perplexity](https://huggingface.co/docs/transformers/en/perplexity):** one of the key metrics used in NLP evaluation. It measures the quality of a language model by evaluating how well it predicts the next token given a particular sequence of words. A PPL of **1** indicates an exact match between predicted and actual, whereas values greater than one indicate a degree of "surprise" the generated token differs from the expected.
87
 
88
- **[Kullback–Leibler (KL) Divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence):** a statistical measure of how much a probability distribution differs from another. When quantizing models (or altering the original tensors in any way for that matter), the closest we can preserve the weights' probability distribution to the orignal model the better, thus the closest to **0** the better.
89
 
90
- ## Credits
91
 
92
- A big **Thank You!** to [Colin Kealty](https://huggingface.co/bartowski) for the many contributions and for being one of the best sources of high quality quantized models available in Hugginface, and a really big ***Thank You!*** to [Georgi Gerganov](https://github.com/ggerganov) for his amazing work with **llama.cpp** and the **gguf** file format.
11
  tags:
12
  - gguf
13
  - quant
14
+ - experimental
15
  ---
16
 
17
+ # Experimental GGUF quantized versions of cognitivecomputations/Dolphin3.0-Mistral-24B
18
 
19
+ Using [LLaMA C++](<https://github.com/ggerganov/llama.cpp>) release [b4837](<https://github.com/ggerganov/llama.cpp/releases/tag/b4837>) for quantization.
20
 
21
  Original model: [cognitivecomputations/Dolphin3.0-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Mistral-24B)
22
 
23
+ From the original model creators:
24
 
25
+ >Dolphin 3.0 is the next generation of the Dolphin series of instruct-tuned models. Designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases.
26
  >
27
+ >Dolphin aims to be a general purpose instruct model, similar to the models behind ChatGPT, Claude, Gemini. But these models present problems for businesses seeking to include AI in their products.
28
+ >1) They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
29
+ >2) They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
30
+ >3) They maintain control of the alignment, and in particular the alignment is one-size-fits all, not tailored to the application.
31
+ >4) They can see all your queries and they can potentially use that data in ways you wouldn't want.
32
+ >Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.
33
  >
34
+ >Dolphin belongs to YOU, it is your tool, an extension of your will.
35
+ >Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.
36
 
37
  From Eric Hartford's, the creator of the Dolphin model series, [Uncensored Models](https://erichartford.com/uncensored-models):
38
 
 
40
  >
41
  > The reason these models are aligned is that they are trained with data that was generated by ChatGPT, which itself is aligned by an alignment team at OpenAI. As it is a black box, we don't know all the reasons for the decisions that were made, but we can observe it generally is aligned with American popular culture, and to obey American law, and with a liberal and progressive political bias.
42
 
43
+ # PLEASE READ THIS BEFORE USING THESE EXPERIMENTAL VERSIONS!
44
 
45
+ An area of personal interest is finding ways to optimize the inference performance of LLMs when deployed in resource-constrained environments like commodity hardware, desktops, laptops, edge devices, etc. There are many approaches to accomplish this, including architecture simplification and knowledge distillation, but for now I'm focusing primarily on quantization and pruning.
46
 
47
+ The process of [quantization](<https://huggingface.co/docs/optimum/en/concept_guides/quantization>) reduces the precision of the model's weights, leading to significant reductions in model size, memory needs and computational requirements (a good thing), but it comes at the expense of a loss in the model's capabilities and accuracy (a bad thing!).
48
 
49
+ Another approach is to [prune](<https://en.wikipedia.org/wiki/Pruning_(artificial_neural_network)>) the model, that is, to selectively zero-out groups of parameters. Although significant reductions can be achieved this way, the risk of severely degrading the model's performance is markedly higher than when quantizing, as the process requires a deep understanding of the model's architecture in order to identify which tensors can be safely zeroed. For all intents and purposes, pruning is the equivalent of lobotomizing the LLM!
50
 
51
+ A successful outcome is when the overall size is reduced with no, or negligible, loss of capabilities (i.e. language understanding, math and logic problem-solving, conversation, coding, domain-specific knowledge, etc.) compared to the original version. In that regard, the method I'm using seems to yield some modest but encouraging results, and the versions available in this repo are on average **3% smaller** than other high-quality sources, with negligible loss of capability. As I continue to improve the process and develop tools to automate it, I aim to achieve further reductions in the **10-15%** range, maybe more.
52
 
53
+ For testing and comparison I'd normally use models produced by [Unsloth](<https://huggingface.co/unsloth>) ([Daniel and Michael Han](<https://unsloth.ai/>) do some really advanced level stuff!) and [Bartowski](<https://huggingface.co/bartowski>) (see credits below), but only the latter offers a version of this model, so all tests and comparisons are done against a single reference.
54
 
55
+ All experimental versions were generated using an appropriate imatrix created from calibration datasets available at [eaddario/imatrix-calibration](<https://huggingface.co/datasets/eaddario/imatrix-calibration>). At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that scores the relative importance of different features or parameters in a machine learning model. It essentially quantifies the "impact" each feature has on a specific outcome, prediction, or relationship being modeled, and it helps to counterbalance the negative effects of quantization and pruning.
56
 
57
+ The process to generate these models is roughly as follows (see the illustrative command sketch below the list):
58
 
59
+ 1. Convert the original model's tensors to [GGUF](<https://huggingface.co/docs/hub/en/gguf>) F16*
60
+ 2. Estimate the Perplexity score for the F16 model (baseline) using the [wikitext-2-raw-v1](<https://huggingface.co/datasets/Salesforce/wikitext/tree/main/wikitext-2-raw-v1>) dataset, and save the [logits](<https://huggingface.co/eaddario/Dolphin3.0-Mistral-24B-GGUF/tree/main/logits>)
61
+ 3. Generate an [imatrix](<https://huggingface.co/eaddario/Dolphin3.0-Mistral-24B-GGUF/tree/main/imatrix>) from selected calibration datasets
62
+ 4. Quantize & prune versions of the base model
63
+ 5. Calculate Perplexity, KL Divergence, ARC (Easy+Challenge), HellaSwag, MMLU, Truthful QA and WinoGrande [scores](<https://huggingface.co/eaddario/Dolphin3.0-Mistral-24B-GGUF/tree/main/scores>) for each quantized model
64
+ 6. Keep versions with the best scores
65
+ 7. Repeat until all desired quants are created. I find that quantizations below Q3/IQ3 are not fit for my purposes and therefore do not usually generate them, but I'm happy to provide other quants on request.
66
 
67
+ *[BF16](<https://en.wikipedia.org/wiki/Bfloat16_floating-point_format>) would be preferred, but Apple's GPUs don't support it yet, and therefore any operations are executed on the CPU, making it unacceptably slow. This is expected to change in the near term, but until then, if you are using Apple kit, avoid models tagged BF16.
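The sketch below illustrates, in very rough terms, what steps 1-5 look like with the stock llama.cpp tools. File and directory names are placeholders, the pruning part of step 4 is not shown, and exact flag names can vary between llama.cpp releases, so treat this as an outline rather than the precise recipe used for this repo.

```bash
# 1. Convert the original safetensors checkpoint to GGUF F16
#    (convert_hf_to_gguf.py ships with the llama.cpp source tree)
python convert_hf_to_gguf.py ./Dolphin3.0-Mistral-24B \
  --outtype f16 --outfile Dolphin3.0-Mistral-24B-F16.gguf

# 2. Baseline perplexity on wikitext-2-raw-v1, saving the F16 logits
#    so KL Divergence can be computed against them later
./llama-perplexity -m Dolphin3.0-Mistral-24B-F16.gguf \
  -f wikitext-2-raw/wiki.test.raw --kl-divergence-base logits/F16.logits

# 3. Generate an importance matrix from a calibration dataset
./llama-imatrix -m Dolphin3.0-Mistral-24B-F16.gguf \
  -f calibration.txt -o imatrix/Dolphin3.0-Mistral-24B.imatrix

# 4. Quantize using the imatrix (one run per quant type)
./llama-quantize --imatrix imatrix/Dolphin3.0-Mistral-24B.imatrix \
  Dolphin3.0-Mistral-24B-F16.gguf Dolphin3.0-Mistral-24B-Q4_K_M.gguf Q4_K_M

# 5. Perplexity and KL Divergence of the quantized model vs the F16 logits
./llama-perplexity -m Dolphin3.0-Mistral-24B-Q4_K_M.gguf \
  --kl-divergence-base logits/F16.logits --kl-divergence
```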
68
 
69
  # Models
70
 
71
+ ### Sizes (in GB)
72
+ | Model | Bartowski | Repo | Shrinkage |
73
+ |-----------------------------------------------------------------------|----------:|------:|----------:|
74
+ | [Dolphin3.0-Mistral-24B-IQ3_M](./Dolphin3.0-Mistral-24B-IQ3_M.gguf) | 10.65 | 10.25 | 3.8% |
75
+ | [Dolphin3.0-Mistral-24B-IQ3_S](./Dolphin3.0-Mistral-24B-IQ3_S.gguf) | N/A | 10.03 | N/A |
76
+ | [Dolphin3.0-Mistral-24B-IQ4_NL](./Dolphin3.0-Mistral-24B-IQ4_NL.gguf) | 13.05 | 13.05 | 0% |
77
+ | [Dolphin3.0-Mistral-24B-Q3_K_L](./Dolphin3.0-Mistral-24B-Q3_K_L.gguf) | 12.40 | 12.00 | 3.2% |
78
+ | [Dolphin3.0-Mistral-24B-Q3_K_M](./Dolphin3.0-Mistral-24B-Q3_K_M.gguf) | 11.47 | 11.08 | 3.4% |
79
+ | [Dolphin3.0-Mistral-24B-Q3_K_S](./Dolphin3.0-Mistral-24B-Q3_K_S.gguf) | 10.40 | 10.00 | 3.8% |
80
+ | [Dolphin3.0-Mistral-24B-Q4_K_M](./Dolphin3.0-Mistral-24B-Q4_K_M.gguf) | 14.33 | 13.99 | 2.4% |
81
+ | [Dolphin3.0-Mistral-24B-Q4_K_S](./Dolphin3.0-Mistral-24B-Q4_K_S.gguf) | 13.55 | 13.13 | 3.1% |
82
+ | [Dolphin3.0-Mistral-24B-Q5_K_M](./Dolphin3.0-Mistral-24B-Q5_K_M.gguf) | 16.76 | 16.26 | 3% |
83
+ | [Dolphin3.0-Mistral-24B-Q5_K_S](./Dolphin3.0-Mistral-24B-Q5_K_S.gguf) | 16.30 | 15.80 | 3.1% |
84
+ | [Dolphin3.0-Mistral-24B-Q6_K](./Dolphin3.0-Mistral-24B-Q6_K.gguf) | 19.35 | 18.75 | 3.1% |
85
+ | [Dolphin3.0-Mistral-24B-Q8_0](./Dolphin3.0-Mistral-24B-Q8_0.gguf) | 25.05 | 24.14 | 3.6% |
86
+
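Shrinkage here corresponds to (Bartowski − Repo) / Bartowski; for example, for Q6_K: (19.35 − 18.75) / 19.35 ≈ 3.1%.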
87
+ ### Perplexity and KL Divergence scores
88
+ | Model | μPPL | 𝜌PPL | μKLD | RMS Δp |
89
+ |-----------------------------------------------------------------------|--------------------:|-------:|-------------------:|--------------:|
90
+ | [Dolphin3.0-Mistral-24B-IQ3_M](./Dolphin3.0-Mistral-24B-IQ3_M.gguf) | 8.536738 ±0.059988 | 98.03% | 0.103999 ±0.000378 | 9.251 ±0.039 |
91
+ | [Dolphin3.0-Mistral-24B-IQ3_S](./Dolphin3.0-Mistral-24B-IQ3_S.gguf) | 8.602292 ±0.060945 | 97.95% | 0.108536 ±0.000393 | 9.309 ±0.040 |
92
+ | [Dolphin3.0-Mistral-24B-IQ4_NL](./Dolphin3.0-Mistral-24B-IQ4_NL.gguf) | 7.895583 ±0.054759 | 99.38% | 0.031400 ±0.000136 | 5.278 ±0.027 |
93
+ | [Dolphin3.0-Mistral-24B-Q3_K_L](./Dolphin3.0-Mistral-24B-Q3_K_L.gguf) | 8.320959 ±0.058079 | 98.43% | 0.083143 ±0.000296 | 8.464 ±0.035 |
94
+ | [Dolphin3.0-Mistral-24B-Q3_K_M](./Dolphin3.0-Mistral-24B-Q3_K_M.gguf) | 8.389337 ±0.058940 | 98.30% | 0.089559 ±0.000330 | 8.734 ±0.037 |
95
+ | [Dolphin3.0-Mistral-24B-Q3_K_S](./Dolphin3.0-Mistral-24B-Q3_K_S.gguf) | 8.681563 ±0.061366 | 97.26% | 0.138030 ±0.000557 | 10.731 ±0.047 |
96
+ | [Dolphin3.0-Mistral-24B-Q4_K_M](./Dolphin3.0-Mistral-24B-Q4_K_M.gguf) | 7.868503 ±0.054510 | 99.45% | 0.027967 ±0.000110 | 4.959 ±0.024 |
97
+ | [Dolphin3.0-Mistral-24B-Q4_K_S](./Dolphin3.0-Mistral-24B-Q4_K_S.gguf) | 7.922314 ±0.055044 | 99.40% | 0.030745 ±0.000128 | 5.176 ±0.026 |
98
+ | [Dolphin3.0-Mistral-24B-Q5_K_M](./Dolphin3.0-Mistral-24B-Q5_K_M.gguf) | 7.785290 ±0.053616 | 99.63% | 0.018900 ±0.000070 | 4.199 ±0.019 |
99
+ | [Dolphin3.0-Mistral-24B-Q5_K_S](./Dolphin3.0-Mistral-24B-Q5_K_S.gguf) | 7.819818 ±0.053887 | 99.62% | 0.019857 ±0.000076 | 4.338 ±0.020 |
100
+ | [Dolphin3.0-Mistral-24B-Q6_K](./Dolphin3.0-Mistral-24B-Q6_K.gguf) | 7.757601 ±0.053375 | 99.70% | 0.015707 ±0.000047 | 3.852 ±0.014 |
101
+ | [Dolphin3.0-Mistral-24B-Q8_0](./Dolphin3.0-Mistral-24B-Q8_0.gguf) | 7.737414 ±0.053007 | 99.72% | 0.014644 ±0.000046 | 3.754 ±0.015 |
102
+ | [Dolphin3.0-Mistral-24B-F16](./Dolphin3.0-Mistral-24B-F16.gguf) | 9.366577 ±0.066397 | 100% | N/A | N/A |
103
+
104
+ ### ARC, HellaSwag, MMLU, Truthful QA and WinoGrande scores
105
+ Scores generated using [llama-perplexity](<https://github.com/ggml-org/llama.cpp/tree/master/examples/perplexity>) with 750 tasks per test, and a context size of 768 tokens. Naive (`llama-quantize` with no optimization) Q4_K_M quantization included for comparison.
106
+
107
+ For the test data used in the generation of these scores, follow the appropriate links: [HellaSwag](<https://github.com/klosax/hellaswag_text_data>), [ARC, MMLU, Truthful QA](<https://huggingface.co/datasets/ikawrakow/validation-datasets-for-llama.cpp/tree/main>) and [WinoGrande](<https://huggingface.co/datasets/ikawrakow/winogrande-eval-for-llama.cpp/tree/main>)
108
+
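For illustration only, runs along the following lines would produce this kind of score (the Q4_K_M file name, dataset file names and task counts are placeholders, and flag names may differ between llama.cpp releases):

```bash
# HellaSwag: commonsense sentence completion, 750 tasks, 768-token context
./llama-perplexity -m Dolphin3.0-Mistral-24B-Q4_K_M.gguf \
  -f hellaswag_val_full.txt --hellaswag --hellaswag-tasks 750 -c 768

# ARC, MMLU and Truthful QA: multiple-choice harness over the binary datasets
./llama-perplexity -m Dolphin3.0-Mistral-24B-Q4_K_M.gguf \
  -bf arc-challenge-validation.bin --multiple-choice --multiple-choice-tasks 750 -c 768

# WinoGrande: pronoun-resolution pairs
./llama-perplexity -m Dolphin3.0-Mistral-24B-Q4_K_M.gguf \
  -f winogrande-debiased-eval.csv --winogrande --winogrande-tasks 750 -c 768
```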
109
+ | Model | ARC | HellaSwag | MMLU | Truthful QA | WinoGrande |
110
+ |-------------------------------------------------------------------------------------------------------------------------------|----------------:|----------:|----------------:|----------------:|----------------:|
111
+ | [Dolphin3.0-Mistral-24B-IQ3_M](./Dolphin3.0-Mistral-24B-IQ3_M.gguf) | 70.5333 ±1.6658 | 80.80 | 43.7333 ±1.8126 | 35.4667 ±1.7481 | 74.9333 ±1.5836 |
112
+ | [Dolphin3.0-Mistral-24B-IQ3_S](./Dolphin3.0-Mistral-24B-IQ3_S.gguf) | 70.1333 ±1.6723 | 80.93 | 43.7333 ±1.8126 | 36.5333 ±1.7594 | 74.4000 ±1.5947 |
113
+ | [Dolphin3.0-Mistral-24B-IQ4_NL](./Dolphin3.0-Mistral-24B-IQ4_NL.gguf) | 72.1333 ±1.6382 | 80.27 | 42.6667 ±1.8072 | 35.7333 ±1.7510 | 76.5333 ±1.5485 |
114
+ | [Dolphin3.0-Mistral-24B-Q3_K_L](./Dolphin3.0-Mistral-24B-Q3_K_L.gguf) | 72.5333 ±1.6309 | 80.93 | 41.6000 ±1.8010 | 34.2667 ±1.7342 | 75.7333 ±1.5664 |
115
+ | [Dolphin3.0-Mistral-24B-Q3_K_M](./Dolphin3.0-Mistral-24B-Q3_K_M.gguf) | 73.4667 ±1.6132 | 80.93 | 42.1333 ±1.8042 | 34.9300 ±1.6774 | 76.1333 ±1.5576 |
116
+ | [Dolphin3.0-Mistral-24B-Q3_K_S](./Dolphin3.0-Mistral-24B-Q3_K_S.gguf) | 70.5333 ±1.6658 | 80.67 | 41.2000 ±1.7984 | 35.2000 ±1.7451 | 74.6667 ±1.5892 |
117
+ | [Dolphin3.0-Mistral-24B-Q4_K_M](./Dolphin3.0-Mistral-24B-Q4_K_M.gguf) | 72.6667 ±1.6284 | 81.73 | 42.5333 ±1.8065 | 35.0667 ±1.7436 | 77.6000 ±1.5234 |
118
+ | [Dolphin3.0-Mistral-24B-Q4_K_M-bartowski](https://huggingface.co/bartowski/cognitivecomputations_Dolphin3.0-Mistral-24B-GGUF) | 72.2667 ±1.6358 | 81.73 | 42.8000 ±1.8079 | 35.0667 ±1.7436 | 76.8000 ±1.5424 |
119
+ | [Dolphin3.0-Mistral-24B-Q4_K_S](./Dolphin3.0-Mistral-24B-Q4_K_S.gguf) | 72.4000 ±1.6334 | 81.47 | 43.0667 ±1.8093 | 35.3333 ±1.7466 | 77.2000 ±1.5330 |
120
+ | [Dolphin3.0-Mistral-24B-Q5_K_M](./Dolphin3.0-Mistral-24B-Q5_K_M.gguf) | 72.0000 ±1.6406 | 81.20 | 42.9333 ±1.8086 | 35.7333 ±1.7510 | 78.1333 ±1.5103 |
121
+ | [Dolphin3.0-Mistral-24B-Q5_K_S](./Dolphin3.0-Mistral-24B-Q5_K_S.gguf) | 72.6667 ±1.6284 | 81.47 | 41.6000 ±1.8010 | 35.7333 ±1.7510 | 76.5333 ±1.5485 |
122
+ | [Dolphin3.0-Mistral-24B-Q6_K](./Dolphin3.0-Mistral-24B-Q6_K.gguf) | 72.4000 ±1.6334 | 81.47 | 43.0667 ±1.8093 | 36.6667 ±1.7608 | 77.8667 ±1.5169 |
123
+ | [Dolphin3.0-Mistral-24B-Q8_0](./Dolphin3.0-Mistral-24B-Q8_0.gguf) | 72.8000 ±1.6260 | 81.33 | 43.0667 ±1.8093 | 35.7333 ±1.7510 | 77.4667 ±1.5266 |
124
+ | [Dolphin3.0-Mistral-24B-F16](./Dolphin3.0-Mistral-24B-F16.gguf) | 71.6000 ±1.6477 | 81.47 | 43.4667 ±1.8113 | 35.4667 ±1.7481 | 77.6000 ±1.5234 |
125
+
126
+ ### Tokens per Second - Benchmarks
127
+ Scores generated using [llama-bench](<https://github.com/ggml-org/llama.cpp/tree/master/examples/llama-bench>). Naive (`llama-quantize` with no optimization) Q4_K_M quantization included for comparison.
128
+
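As a rough sketch (the exact invocation is not recorded here, so treat the flags as an assumption), the figures above correspond to something like:

```bash
# Prompt processing (pp512), token generation (tg128) and a combined
# pp1024+tg1024 run, offloading 12 layers to the GPU (-ngl 12)
./llama-bench -m Dolphin3.0-Mistral-24B-Q4_K_M.gguf \
  -ngl 12 -p 512 -n 128 -pg 1024,1024
```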
129
+ | model | size | params | backend | ngl | test | t/s |
130
+ |-------------------------------------------------------------------------------------------------------------------------------|----------:|--------:|---------|----:|--------------:|--------------:|
131
+ | [Dolphin3.0-Mistral-24B-Q4_K_M](./Dolphin3.0-Mistral-24B-Q4_K_M.gguf) | 12.95 GiB | 23.57 B | CUDA | 12 | pp512 | 164.39 ± 0.20 |
132
+ | [Dolphin3.0-Mistral-24B-Q4_K_M](./Dolphin3.0-Mistral-24B-Q4_K_M.gguf) | 12.95 GiB | 23.57 B | CUDA | 12 | tg128 | 4.71 ± 0.06 |
133
+ | [Dolphin3.0-Mistral-24B-Q4_K_M](./Dolphin3.0-Mistral-24B-Q4_K_M.gguf) | 12.95 GiB | 23.57 B | CUDA | 12 | pp1024+tg1024 | 8.87 ± 0.04 |
134
+ | [Dolphin3.0-Mistral-24B-Q4_K_M-bartowski](https://huggingface.co/bartowski/cognitivecomputations_Dolphin3.0-Mistral-24B-GGUF) | 13.34 GiB | 23.57 B | CUDA | 12 | pp512 | 162.55 ± 0.47 |
135
+ | [Dolphin3.0-Mistral-24B-Q4_K_M-bartowski](https://huggingface.co/bartowski/cognitivecomputations_Dolphin3.0-Mistral-24B-GGUF) | 13.34 GiB | 23.57 B | CUDA | 12 | tg128 | 4.57 ± 0.03 |
136
+ | [Dolphin3.0-Mistral-24B-Q4_K_M-bartowski](https://huggingface.co/bartowski/cognitivecomputations_Dolphin3.0-Mistral-24B-GGUF) | 13.34 GiB | 23.57 B | CUDA | 12 | pp1024+tg1024 | 8.62 ± 0.03 |
137
 
138
  # Metrics used
139
+ **[Perplexity](<https://huggingface.co/docs/transformers/en/perplexity>):** one of the key metrics used in NLP evaluation. It measures the quality of a language model by evaluating how well it predicts the next token given a particular sequence of words. A PPL of **1** indicates an exact match between predicted and actual, whereas values greater than one indicate the degree of "surprise", i.e. how much the generated token differs from the expected one.
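Formally, for a sequence of N tokens it is the exponential of the average negative log-likelihood:

$$\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p_\theta(x_i \mid x_{<i})\right)$$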
140
 
141
+ **[Kullback–Leibler (KL) Divergence](<https://en.wikipedia.org/wiki/Kullback–Leibler_divergence>):** a statistical measure of how much one probability distribution differs from another. When quantizing models (or altering the original tensors in any way, for that matter), the closer we can keep the weights' probability distribution to the original model's, the better, thus the closer to **0** the better.
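For discrete distributions P (reference model) and Q (quantized model) over the same set of outcomes, it is defined as:

$$D_{\mathrm{KL}}(P \parallel Q) = \sum_{i} P(i)\,\log\frac{P(i)}{Q(i)}$$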
142
 
143
+ **[AI2 Reasoning Challenge (ARC)](<https://leaderboard.allenai.org/arc/submissions/get-started>):** a benchmark to evaluate the ability of AI models to answer complex science questions that require logical reasoning beyond pattern matching.
144
 
145
+ **[HellaSwag](<https://rowanzellers.com/hellaswag/>):** the Harder Endings, Longer contexts, and Low-shot Activities for Situations With Adversarial Generations (bit of a mouthful!) is a benchmark designed to test commonsense natural language inference. It requires the model to predict the most likely ending of a sentence.
146
+
147
+ **[MMLU](<https://github.com/hendrycks/test>):** the Massive Multitask Language Understanding benchmark evaluates LLMs’ general knowledge and problem-solving abilities across 57 subjects, including elementary mathematics, US history, computer science, and law.
148
+
149
+ **[Truthful QA](<https://github.com/sylinrl/TruthfulQA>):** evaluates how well LLMs generate truthful responses to questions. It identifies whether AI models can avoid generating false or misleading information, particularly in areas where human knowledge is prone to misconceptions.
150
 
151
+ **[Winogrande](<https://winogrande.allenai.org/>):** based on the [Winograd Schema Challenge](<https://cdn.aaai.org/ocs/4492/4492-21843-1-PB.pdf>), it is a natural language understanding task requiring models to resolve ambiguities in sentences involving pronoun references.
152
+
153
+ ## Credits
154
+ A big **Thank You!** to [Colin Kealty](<https://huggingface.co/bartowski>) for the many contributions and for being one of the best sources of high quality quantized models available on Hugging Face, and a really big ***Thank You!*** to [Georgi Gerganov](<https://github.com/ggerganov>) for his amazing work with **llama.cpp** and the **gguf** file format.