nisten committed on
Commit c77f40c · verified · 1 Parent(s): 1b861d6

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -5,7 +5,7 @@ base_model: [deepseek-ai/DeepSeek-Coder-V2-Instruct]
 #### Custom quantizations of deepseek-coder-v2-instruct optimized for cpu inference.
 
 ### This one uses GGML TYPE IQ_4_XS in combination with q8_0 so it runs fast with minimal loss and takes advantage of int8 optimizations on most newer server cpus.
-### While it required custom code to make, it is standard compatible with plain llama.cpp from github or just search nisten in lmstudio
+### While it required custom code to make, it is standard compatible with plain llama.cpp from github or just search nisten in lmstudio.
 
 >[!TIP]
 >The following 4bit version is the one I use myself, it gets 17tps on 64 arm cores.
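For context on the usage the README describes (GGUF quants that run on plain llama.cpp on many-core CPUs), here is a minimal sketch using the llama-cpp-python bindings; the model filename, prompt, and context size are illustrative assumptions, not artifacts of this repo, and the thread count simply mirrors the 64 ARM cores mentioned above:

```python
# Minimal sketch: load a 4-bit GGUF quant with llama-cpp-python and generate on CPU.
# The model_path below is a placeholder, not an actual file name from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-v2-instruct-iq4xs.gguf",  # hypothetical local path
    n_ctx=4096,    # context window to allocate
    n_threads=64,  # matches the 64 ARM cores cited in the README tip
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a C function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```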