Datasets:
Modalities: Text
Formats: text
Languages: Chinese
Tags: Not-For-All-Audiences
Libraries: Datasets
License:
Limour committed (verified)
Commit 7025895 · 1 Parent(s): 138d8df

Update README.md

Files changed (1):
  1. README.md +1 -0
README.md CHANGED
@@ -17,6 +17,7 @@ https://www.kaggle.com/code/reginliu/perplexity
 | [Fi-9B-200K-Q8_0.gguf](https://huggingface.co/DisOOM/Fi-9B-GGUF/blob/main/Fi-9B-Q8_0.gguf) | 9.38 | 6.8402 +/- 0.05741 | 64000 | 16.2523 |
 | [causallm_7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q5_K_M.gguf) | 5.53 | 16.5278 +/- 0.18005 | 152064 | 16.5278 |
 | [Qwen1.5-22B-Chat-Merge-Q4_0.gguf](https://huggingface.co/DisOOM/Qwen1.5-22B-Chat-Merge-GGUF/blob/main/Qwen1.5-22B-Chat-Merge-Q4_0.gguf) | 12.6 | 21.9669 +/- 0.28980 | 152064 | 21.9669 |
+ | [Kunoichi-DPO-v2-7B-Q4_K_M-imatrix.gguf](https://hf-mirror.com/Lewdiculous/Kunoichi-DPO-v2-7B-GGUF-Imatrix/blob/main/Kunoichi-DPO-v2-7B-Q4_K_M-imatrix.gguf) | 4.37 | 6.7096 +/- 0.04519 | 32000 | 31.8840 |
 
 For a model that returns tokens completely at random, we have
 $$ P(token|context) = \frac{1}{n_{vocab}}, \quad PPL = \sqrt[N]{\left(\frac{1}{P}\right)^N} = n_{vocab} $$
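
The added row also shows why PPL and vocabulary size are reported side by side: a purely random model already scores PPL = n_vocab, so raw PPL values are not directly comparable across different vocabularies. Below is a minimal Python sketch of that relationship; it assumes the final column is the measured PPL rescaled linearly to the largest vocabulary in the table (152064), a reading that is consistent with all four rows but is my inference, not part of the commit. The function names and the reference size are illustrative.

```python
# Minimal sketch (not from the commit) of the two quantities discussed above.
import math

def random_model_ppl(n_vocab: int) -> float:
    """PPL of a model that picks every token uniformly at random.

    Each token has probability 1/n_vocab, so the geometric mean of the
    inverse probabilities is simply n_vocab."""
    avg_nll = -math.log(1.0 / n_vocab)  # average negative log-likelihood per token
    return math.exp(avg_nll)            # equals n_vocab

def vocab_scaled_ppl(ppl: float, n_vocab: int, n_ref: int = 152064) -> float:
    """Rescale PPL linearly by vocabulary size (assumed reading of the last column)."""
    return ppl * n_ref / n_vocab

# Values from the added diff line (Kunoichi-DPO-v2-7B, n_vocab = 32000):
print(random_model_ppl(32000))          # 32000.0 -> PPL of pure guessing
print(vocab_scaled_ppl(6.7096, 32000))  # ~31.88, matching the table's last column
```

Under that reading, the Kunoichi row's raw PPL of 6.7096 at n_vocab = 32000 maps to roughly 31.88 at the 152064-token reference, which is why it compares less favorably than its raw PPL alone suggests.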