https://www.kaggle.com/code/reginliu/perplexity

| Model | Size (GB) | PPL | n_vocab | PPL_adjust |
|-------|-----------|-----|---------|------------|
| [qwen2.5-14b-fp16.gguf](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct-GGUF/blob/main/qwen2.5-14b-instruct-fp16-00001-of-00008.gguf) | 27.51 | 9.5316 +/- 0.08886 | 152064 | 9.5316 |
| [qwen1_5-14b-chat-IQ3_XS.gguf](https://huggingface.co/Limour/Qwen1.5-14B-Chat-GGUF/blob/main/qwen1_5-14b-chat-IQ3_XS.gguf) | 6.48 | 11.8084 +/- 0.121615 | 152064 | 11.8084 |
| [causallm_14b.IQ3_XS.gguf](https://huggingface.co/Limour/CausalLM-14B-GGUF/blob/main/causallm_14b.IQ3_XS.gguf) | 6.48 | 13.3798 +/- 0.13641 | 152064 | 13.3798 |
| [causallm_14b.IQ4_XS.gguf](https://huggingface.co/Limour/CausalLM-14B-GGUF/blob/main/causallm_14b.IQ4_XS.gguf) | 7.85 | 13.4127 +/- 0.13762 | 152064 | 13.4127 |
| [Qwen1.5-22B-Chat-Merge-Q4_0.gguf](https://huggingface.co/DisOOM/Qwen1.5-22B-Chat-Merge-GGUF/blob/main/Qwen1.5-22B-Chat-Merge-Q4_0.gguf) | 12.6 | 21.9669 +/- 0.28980 | 152064 | 21.9669 |
| [Kunoichi-DPO-v2-7B-Q4_K_M-imatrix.gguf](https://hf-mirror.com/Lewdiculous/Kunoichi-DPO-v2-7B-GGUF-Imatrix/blob/main/Kunoichi-DPO-v2-7B-Q4_K_M-imatrix.gguf) | 4.37 | 6.7096 +/- 0.04519 | 32000 | 31.8840 |
| [WizardLM-2-7B-IQ4_XS-imat.gguf](https://huggingface.co/ABX-AI/WizardLM-2-7B-GGUF-IQ-Imatrix/blob/main/WizardLM-2-7B-IQ4_XS-imat.gguf) | 3.91 | 9.8891 +/- 0.08106 | 32000 | 46.9930 |
For a model that returns tokens completely at random, we have

$$ P(token|context) = \frac{1}{n_{vocab}}, \quad PPL = \sqrt[N]{\left(\frac{1}{P}\right)^N} = n_{vocab} $$

therefore
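The identity above, and the table's PPL_adjust column, can be checked numerically. A minimal sketch: the `perplexity` and `ppl_adjust` helpers are hypothetical names, and the linear rescaling to a reference vocabulary of 152064 is an assumption inferred from the table (it reproduces the listed PPL_adjust values for the 32000-vocab models), not a formula stated in this README.

```python
import math

def perplexity(probs):
    # PPL = exp(mean negative log-likelihood), equivalent to (prod 1/p_i)^(1/N)
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

# A model that picks tokens uniformly at random assigns P = 1/n_vocab everywhere,
# so its perplexity equals n_vocab, as in the formula above.
n_vocab = 32000
uniform = [1.0 / n_vocab] * 100
assert abs(perplexity(uniform) - n_vocab) < 1e-6

def ppl_adjust(ppl, n_vocab, n_ref=152064):
    # Assumed normalization: scale PPL linearly to a common reference vocabulary
    # so models with different tokenizers become roughly comparable.
    return ppl * n_ref / n_vocab

# Matches the table's PPL_adjust column for the 32000-vocab models:
print(f"{ppl_adjust(6.7096, 32000):.4f}")  # Kunoichi row
print(f"{ppl_adjust(9.8891, 32000):.4f}")  # WizardLM row
```

Under this scaling, the 152064-vocab models keep their raw PPL unchanged, which is why their PPL and PPL_adjust columns coincide.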