---
license: mit
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
---

AWQ 4-bit quantization of DeepSeek-R1-Distill-Qwen-14B at commit 123265213609ea67934b1790bbb0203d3c50f54f, produced with the following script:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
commit_hash = "123265213609ea67934b1790bbb0203d3c50f54f"

# Download the model and tokenizer pinned to the specific commit hash
model = AutoAWQForCausalLM.from_pretrained(model_name, revision=commit_hash)
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=commit_hash)

quant_config = {
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM",
}

# Run AWQ calibration and quantize the weights to 4 bits
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized weights and tokenizer (example output path)
quant_path = "DeepSeek-R1-Distill-Qwen-14B-AWQ"
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```
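As a rough back-of-the-envelope sketch of why this quantization is worthwhile: with `w_bit=4` and `q_group_size=128`, each group of 128 weights stores 4-bit values plus a shared scale and zero point. Assuming roughly 14B parameters and one fp16 scale and zero point per group (an approximation; exact on-disk packing varies by kernel version), the weight storage shrinks by close to 4x:

```python
# Approximate weight-storage comparison: fp16 vs AWQ 4-bit, group size 128.
# Assumes ~14e9 parameters and 16-bit scale + 16-bit zero point per group
# of 128 weights; real checkpoints differ slightly due to packing details.
params = 14e9

fp16_gb = params * 16 / 8 / 1e9  # 2 bytes per weight

# Effective bits per weight: 4-bit value plus amortized scale/zero point
bits_per_weight = 4 + (16 + 16) / 128  # = 4.25 bits

awq_gb = params * bits_per_weight / 8 / 1e9

print(f"fp16 weights:      {fp16_gb:.1f} GB")
print(f"AWQ 4-bit weights: {awq_gb:.1f} GB")
print(f"compression:       {fp16_gb / awq_gb:.2f}x")
```

This is only the weight footprint; activations, KV cache, and runtime buffers add to actual memory use.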