|
--- |
|
base_model: [deepseek-ai/DeepSeek-Coder-V2-Instruct] |
|
--- |
|
|
|
#### Custom quantizations of DeepSeek-Coder-V2-Instruct optimized for CPU inference.
|
|
|
### This one uses GGML type IQ4_XS in combination with Q8_0, so it runs fast with minimal loss and takes advantage of int8 optimizations on most newer server CPUs.
|
### While it required custom code to make, it is standard-compatible with plain llama.cpp.
|
|
|
```
|
deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf |
|
deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf |
|
deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf |
|
deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf |
|
``` |
|
|
|
> [!TIP]
> To download much faster, install aria2 first: `sudo apt install aria2` on Linux, `brew install aria2` on macOS.
|
```bash
|
sudo apt install -y aria2 |
|
|
|
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf \ |
|
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf |
|
|
|
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf \ |
|
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf |
|
|
|
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf \ |
|
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf |
|
|
|
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf \ |
|
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf |
|
``` |
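The four near-identical commands above can also be generated with a short loop. This is just a convenience sketch that assumes the same shard names and repository URL as above; it echoes each command so you can review them first (remove the `echo` to run the downloads directly):

```shell
# Convenience sketch: print the aria2c command for each of the four shards.
# Remove the leading "echo" to actually run the downloads.
BASE=https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main
for i in 1 2 3 4; do
  part=$(printf 'deepseek_coder_v2_cpu_iq4xm.gguf-%05d-of-00004.gguf' "$i")
  echo aria2c -x 8 -o "$part" "$BASE/$part"
done
```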