---
license: mit
base_model:
- deepseek-ai/DeepSeek-R1
base_model_relation: quantized
tags:
- VPTQ
- Quantized
- Quantization
---
|
|
|
**Disclaimer**:
|
|
|
This model was reproduced following the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)).
|
|
|
The model itself is sourced from a community release.

It is intended only for experimental purposes.

Users are responsible for any consequences arising from the use of this model.
|
|
|