---
license: mit
library_name: transformers
---
# huihui-ai/DeepSeek-R1
This model was converted from [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) to BF16.
Here we provide the conversion commands and related information for Ollama.
If needed, we can upload the BF16 version.
## FP8 to BF16
1. Download the [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) model; it requires approximately 641 GB of disk space.
```
cd /home/admin/models
huggingface-cli download deepseek-ai/DeepSeek-R1 --local-dir ./deepseek-ai/DeepSeek-R1
```
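Before and after the download, it is worth sanity-checking disk usage (a trivial check; adjust paths to your setup):
```
df -h /home/admin/models                            # need roughly 641 GB free for the FP8 weights
du -sh /home/admin/models/deepseek-ai/DeepSeek-R1   # verify the size of the finished download
```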
2. Create the conda environment. The `requirements.txt` installed here is the one from the "inference" folder of [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); one way to fetch it is sketched right after this step.
```
conda create -yn DeepSeek-V3 python=3.12
conda activate DeepSeek-V3
pip install -r requirements.txt
```
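One way to fetch the "inference" code (the source of `requirements.txt` above and of the conversion script used next) is to clone the DeepSeek-V3 GitHub repository; a sketch, assuming it mirrors the Hugging Face repo layout:
```
cd /home/admin/models
git clone https://github.com/deepseek-ai/DeepSeek-V3.git deepseek-ai/DeepSeek-V3
```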
3. Convert to BF16; this requires approximately an additional 1.3 TB of space.
The conversion script `fp8_cast_bf16.py` comes from the "inference" folder of [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3) fetched above.
```
cd deepseek-ai/DeepSeek-V3/inference
python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1-bf16
```
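Optionally verify the result before moving on:
```
du -sh /home/admin/models/deepseek-ai/DeepSeek-R1-bf16   # expect roughly 1.3 TB of BF16 weights
```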
## BF16 to f16 GGUF
1. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) conversion script `convert_hf_to_gguf.py` to convert DeepSeek-R1-bf16 to GGUF format; this requires approximately an additional 1.3 TB of space. The block below first fetches llama.cpp and the converter's dependencies.
```
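# convert_hf_to_gguf.py ships with llama.cpp; fetch the repo and the
# converter's Python dependencies first (a sketch; skip if you already
# have a llama.cpp checkout):
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip install -r requirements.txt
# Then run the converter: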
python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-R1-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-R1-bf16/ggml-model-f16.gguf --outtype f16
```
2. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) quantization tool to quantize the model (`llama-quantize` must be compiled first; build steps are sketched in the block below).
Other quantization options are listed in [quantize.cpp](https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp).
We convert to Q2_K first, which requires approximately an additional 227 GB of space.
```
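# llama-quantize must be built first; a typical CMake build from the
# llama.cpp checkout (a sketch; the binary lands in build/bin/):
cmake -B build
cmake --build build --config Release -j
# Then quantize (Q2_K here):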
llama-quantize /home/admin/models/deepseek-ai/DeepSeek-R1-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-R1-bf16/ggml-model-Q2_K.gguf Q2_K
```
3. Use llama-cli to test the quantized model.
```
llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-R1-bf16/ggml-model-Q2_K.gguf -n 2048
```
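`-n 2048` caps generation at 2048 tokens. For a quick non-interactive smoke test, a prompt can also be passed directly (the prompt below is just an example):
```
llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-R1-bf16/ggml-model-Q2_K.gguf -p "Why is the sky blue?" -n 512
```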
## Use with Ollama
**Note:** this model requires [Ollama 0.5.5](https://github.com/ollama/ollama/releases/tag/v0.5.5).
You can use [huihui_ai/deepseek-r1:671b-q2_K](https://ollama.com/huihui_ai/deepseek-r1:671b-q2_K) directly:
```
ollama run huihui_ai/deepseek-r1:671b-q2_K
```
or [huihui_ai/deepseek-r1:671b-q3_K](https://ollama.com/huihui_ai/deepseek-r1:671b-q3_K):
```
ollama run huihui_ai/deepseek-r1:671b-q3_K
```
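If you prefer to serve the locally quantized GGUF through Ollama instead of pulling from the registry, a minimal sketch using a Modelfile (the model name `deepseek-r1-local` is arbitrary):
```
cat > Modelfile <<'EOF'
FROM /home/admin/models/deepseek-ai/DeepSeek-R1-bf16/ggml-model-Q2_K.gguf
EOF
ollama create deepseek-r1-local:q2_K -f Modelfile
ollama run deepseek-r1-local:q2_K
```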