BitsAndBytes 4-bit quantization of DeepSeek-R1-Distill-Qwen-14B, from commit 123265213609ea67934b1790bbb0203d3c50f54f.

Format: Safetensors
Model size: 8.37B params
Tensor types: FP16, F32, U8
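The U8 tensor type reflects how 4-bit weights are stored on disk: two 4-bit codes packed into each uint8 byte. A toy pure-Python sketch of that packing (illustrative only; the actual bitsandbytes storage layout and NF4 code values differ):

```python
def pack_4bit(codes):
    """Pack a list of 4-bit codes (0..15) into bytes, two codes per byte."""
    if len(codes) % 2:
        codes = codes + [0]  # pad to an even count
    return bytes((hi << 4) | lo for hi, lo in zip(codes[::2], codes[1::2]))

def unpack_4bit(packed, n):
    """Recover n 4-bit codes from packed bytes."""
    codes = []
    for b in packed:
        codes.extend((b >> 4, b & 0x0F))
    return codes[:n]

codes = [3, 15, 0, 7, 12]
packed = pack_4bit(codes)  # 3 bytes hold 5 codes (one padded)
assert unpack_4bit(packed, len(codes)) == codes
```

This halving of bytes per weight is why an 8.37B-parameter 4-bit checkpoint is far smaller than its FP16 original.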

Model: MPWARE/DeepSeek-R1-Distill-Qwen-14B-BnB-4bits