---
license: mit
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-R1
pipeline_tag: text-generation
library_name: transformers
---
# DeepSeek R1 AWQ
AWQ quantization of DeepSeek R1.

This quant includes modifications to the model code that fix an overflow issue when using float16.

To serve using vLLM with 8x 80GB GPUs, use the following command:
```sh
VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 --port 12345 \
  --max-model-len 65536 --max-num-batched-tokens 65536 \
  --trust-remote-code --tensor-parallel-size 8 \
  --gpu-memory-utilization 0.97 --dtype float16 \
  --served-model-name deepseek-reasoner \
  --model cognitivecomputations/DeepSeek-R1-AWQ
```
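Once the server is running, you can send requests to vLLM's OpenAI-compatible endpoint. A minimal sketch, assuming the server is reachable at `localhost` on the port used above:
```sh
# Example chat completion request against the served model.
curl http://localhost:12345/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-reasoner",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 256
  }'
```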
You can download the wheel I built for PyTorch 2.6 and Python 3.12 [here](https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.7.3.dev187%2Bg0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl).
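As a rough sketch, the downloaded wheel can then be installed into a Python 3.12 environment with pip (the filename below matches the link above; adjust the path to wherever you saved it):
```sh
# Install the prebuilt vLLM wheel (Python 3.12, CUDA 12.6 build).
pip install vllm-0.7.3.dev187+g0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl
```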

Inference speed with batch size 1 and a short prompt:
- 8x H100: 48 TPS
- 8x A100: 38 TPS

Note:
- Inference speed will be better than FP8 at low batch sizes but worse than FP8 at high batch sizes; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with the full context length on just 8x 80GB GPUs.