---
license: cc-by-nc-sa-4.0
language:
- zh
tags:
- not-for-all-audiences
---
Perplexity was measured with this notebook: https://www.kaggle.com/code/reginliu/perplexity
| Model | Size (GB) | PPL | n_vocab | PPL_adjust |
|-------|-----------|-----|---------|------------|
| [qwen1_5-14b-chat-IQ3_XS.gguf](https://huggingface.co/Limour/Qwen1.5-14B-Chat-GGUF/blob/main/qwen1_5-14b-chat-IQ3_XS.gguf) | 6.48 | 11.8084 +/- 0.121615 | 152064 | 11.8084 |
| [causallm_14b.IQ3_XS.gguf](https://huggingface.co/Limour/CausalLM-14B-GGUF/blob/main/causallm_14b.IQ3_XS.gguf) | 6.48 | 13.3798 +/- 0.13641 | 152064 | 13.3798 |
| [causallm_14b.IQ4_XS.gguf](https://huggingface.co/Limour/CausalLM-14B-GGUF/blob/main/causallm_14b.IQ4_XS.gguf) | 7.85 | 13.4127 +/- 0.13762 | 152064 | 13.4127 |
| [causallm_14b.Q4_0.gguf](https://huggingface.co/TheBloke/CausalLM-14B-GGUF/blob/main/causallm_14b.Q4_0.gguf) | 8.18 | 13.6714 +/- 0.13964 | 152064 | 13.6714 |
| [causallm_14b.IQ2_XXS.gguf](https://huggingface.co/Limour/CausalLM-14B-GGUF/blob/main/causallm_14b.IQ2_XXS.gguf) | 4.98 | 15.0160 +/- 0.15004 | 152064 | 15.0160 |
| [Yi-9B-200K_iQ3xxs.gguf](https://huggingface.co/MarsupialAI/Yi-9B-200K_iMatrix_GGUF/blob/main/Yi-9B-200K_iQ3xxs.gguf) | 3.47 | 6.8157 +/- 0.05453 | 64000 | 16.1941 |
| [Fi-9B-200K-Q8_0.gguf](https://huggingface.co/DisOOM/Fi-9B-GGUF/blob/main/Fi-9B-Q8_0.gguf) | 9.38 | 6.8402 +/- 0.05741 | 64000 | 16.2523 |
| [causallm_7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q5_K_M.gguf) | 5.53 | 16.5278 +/- 0.18005 | 152064 | 16.5278 |
| [Qwen1.5-22B-Chat-Merge-Q4_0.gguf](https://huggingface.co/DisOOM/Qwen1.5-22B-Chat-Merge-GGUF/blob/main/Qwen1.5-22B-Chat-Merge-Q4_0.gguf) | 12.6 | 21.9669 +/- 0.28980 | 152064 | 21.9669 |

For a model that returns tokens uniformly at random, we have
$$ P(\text{token} \mid \text{context}) = \frac{1}{n_{vocab}}, \quad PPL = \sqrt[N]{\left(\frac{1}{P}\right)^N} = n_{vocab} $$
so, to compare models with different vocabulary sizes, PPL is normalized to the 152,064-token Qwen vocabulary:
$$ PPL_{adjust} = \frac{PPL}{n_{vocab}}  \times 152064  $$
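The normalization above can be sketched in a few lines of Python. The function name and structure are illustrative, not part of the original notebook; the reference vocabulary size 152064 is the Qwen/CausalLM `n_vocab` used in the table.

```python
# Sketch: normalize perplexity across models with different vocabulary
# sizes, per the PPL_adjust formula above. A uniformly random model
# scores PPL = n_vocab, so dividing by n_vocab and rescaling to a
# common reference vocabulary makes the numbers comparable.

REF_VOCAB = 152064  # n_vocab of the Qwen/CausalLM tokenizer (from the table)

def ppl_adjust(ppl: float, n_vocab: int, ref_vocab: int = REF_VOCAB) -> float:
    """Rescale a model's PPL to the reference vocabulary size."""
    return ppl / n_vocab * ref_vocab

# Example: Yi-9B-200K row from the table (PPL 6.8157, n_vocab 64000)
print(round(ppl_adjust(6.8157, 64000), 4))  # 16.1941, matching the table
```

For models that already use the 152,064-token vocabulary, the adjustment is a no-op, which is why their PPL and PPL_adjust columns agree.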