bartowski committed · Commit 72e3522 · verified · 1 Parent(s): afeb916

Update README.md

Files changed (1)
  1. README.md +7 -14
README.md CHANGED
@@ -24,22 +24,15 @@ Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turb
 
 Each branch contains an individual bits per weight, with the main one containing only the measurement.json for further conversions.
 
-Conversion was done using the default calibration dataset.
-
-Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
-
 Original model: https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-7b
 
-
-<a href="https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/8_0">8.0 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/6_5">6.5 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/5_0">5.0 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/4_25">4.25 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/3_5">3.5 bits per weight</a>
+| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
+| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
+| [8_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.2 GB | 10.2 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
+| [6_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/6_5) | 6.5 | 8.0 | 7.1 GB | 7.9 GB | 8.9 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
+| [5_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/5_0) | 5.0 | 6.0 | 5.8 GB | 6.6 GB | 7.6 GB | Slightly lower quality vs 6.5, but usable on 8 GB cards. |
+| [4_25](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/4_25) | 4.25 | 6.0 | 5.1 GB | 5.9 GB | 6.9 GB | GPTQ equivalent bits per weight, slightly higher quality. |
+| [3_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-7b-exl2/tree/3_5) | 3.5 | 6.0 | 4.5 GB | 5.3 GB | 6.3 GB | Lower quality, only use if you have to. |
 
 
 ## Download instructions
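
Since each quant lives on its own branch, one way to grab a single branch is the `huggingface_hub` Python API with a `revision` argument. This is a minimal sketch, not the repo's own instructions; the `6_5` branch and the local directory name are example choices.

```python
# Sketch: download one quant branch from the Hub with huggingface_hub.
# Assumes `pip install huggingface_hub`; revision and local_dir are example values.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/dolphincoder-starcoder2-7b-exl2",
    revision="6_5",                                   # branch name from the table above
    local_dir="dolphincoder-starcoder2-7b-exl2-6_5",  # where the files are placed
)
```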