---
license: mit
library_name: transformers
pipeline_tag: text-generation
tags:
- conversational
- mlx
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
---
# DeepSeek-R1-Distill-Qwen-32B-Q2-6
This model was converted to MLX format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) using mixed 2/6-bit quantization, a scheme in which most weights are quantized to 2 bits while a subset is kept at 6 bits. This preserves quality much better than uniform 2-bit quantization.
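If you want to reproduce a conversion like this one, recent versions of `mlx-lm` expose mixed quantization recipes through a `--quant-predicate` option on the convert command. The sketch below assumes a `mixed_2_6` recipe is available in your `mlx-lm` version; it is not necessarily the exact command used to produce this repository:
```bash
# Sketch of a mixed 2/6-bit conversion. The mixed_2_6 recipe name is an
# assumption; check `python -m mlx_lm.convert --help` for your version.
python -m mlx_lm.convert \
    --hf-path deepseek-ai/DeepSeek-R1-Distill-Qwen-32B \
    --mlx-path DeepSeek-R1-Distill-Qwen-32B-Q2-6 \
    -q --quant-predicate mixed_2_6
```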
## Use with mlx
```bash
pip install mlx-lm
```
```bash
python -m mlx_lm.chat \
    --model pcuenq/DeepSeek-R1-Distill-Qwen-32B-Q2-6 \
    --max-tokens 10000 --temp 0.6 --top-p 0.7
```
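For one-shot (non-interactive) generation, the `mlx_lm.generate` entry point accepts the same sampling options. The prompt below is just an illustration:
```bash
# Single-turn generation with the same sampling settings as the chat
# command above (temperature 0.6, top-p 0.7).
python -m mlx_lm.generate \
    --model pcuenq/DeepSeek-R1-Distill-Qwen-32B-Q2-6 \
    --prompt "Write a haiku about quantization." \
    --max-tokens 10000 --temp 0.6 --top-p 0.7
```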