---
base_model: [deepseek-ai/DeepSeek-Coder-V2-Instruct]
---

#### Custom quantizations of deepseek-coder-v2-instruct optimized for CPU inference.

### This one uses GGML type IQ4_XS in combination with Q8_0, so it runs fast with minimal loss and takes advantage of int8 optimizations on most newer server CPUs.
### While it required custom code to make, it is standard-compatible with plain llama.cpp.

```
deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf
```
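
Once all four shards are in the same directory, llama.cpp can load the model by being pointed at the first shard; it picks up the remaining parts automatically. A minimal invocation sketch (the binary name and flags depend on your llama.cpp build; older builds use `./main` instead of `llama-cli`):

```
./llama-cli \
  -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf \
  -t $(nproc) \
  -p "Write a quicksort in Python."
```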

>[!TIP]
>To download much faster, install aria2 first: `sudo apt install aria2` on Linux, `brew install aria2` on macOS.
```bash
sudo apt install -y aria2

aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf

aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf \
  https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf
```
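
The four commands above differ only in the shard index, so they can be collapsed into one loop. A sketch, assuming the same repo path as above; `printf '%05d'` zero-pads the index to match the shard naming scheme:

```shell
#!/bin/sh
# Download all four shards in one loop; -x 8 opens 8 connections per file.
REPO=https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main
for i in 1 2 3 4; do
  # Build the shard filename, e.g. deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf
  part=$(printf 'deepseek_coder_v2_cpu_iq4xm.gguf-%05d-of-00004.gguf' "$i")
  aria2c -x 8 -o "$part" "$REPO/$part"
done
```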