---
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
---
- Extracted a rank-64 LoRA from DeepSeek-R1-Distill-Qwen-32B
- Merged the adapter back in and quantized to Q4_K_M (a rough sketch of these steps follows below)
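
The card does not name the tooling used, so the following is only a minimal sketch of what the merge step could look like with `transformers` and `peft`; the base-model and adapter paths are placeholders, and the Q4_K_M conversion would typically be a separate llama.cpp step afterwards.

```python
# Rough sketch only: the card does not state which tools were used.
# Assumed workflow: apply the extracted rank-64 LoRA to a base checkpoint with peft,
# fold it into the dense weights, then convert/quantize with llama.cpp afterwards.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "path/to/base-checkpoint"     # assumption: target checkpoint is not stated in the card
ADAPTER = "path/to/extracted-rank64-lora"  # assumption: local path to the extracted LoRA

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Attach the adapter and merge its low-rank deltas into the base weights.
model = PeftModel.from_pretrained(base, ADAPTER)
merged = model.merge_and_unload()

merged.save_pretrained("./merged-model")
tokenizer.save_pretrained("./merged-model")

# Quantization to Q4_K_M would then typically be done with llama.cpp, e.g.:
#   python convert_hf_to_gguf.py ./merged-model --outfile merged-f16.gguf
#   ./llama-quantize merged-f16.gguf merged-Q4_K_M.gguf Q4_K_M
```

Folding the adapter into the weights before conversion keeps the result a single self-contained GGUF rather than a base model plus a separate LoRA file.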

### Note: The model also appears to work, to some extent, with R1's unusual chat template, but it tends to repeat random Chinese characters and output quality is consistently worse.
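
For anyone who wants to try the quantized model locally and compare templates, a minimal test with `llama-cpp-python` could look like the sketch below; the GGUF filename is a placeholder.

```python
# Hypothetical local test with llama-cpp-python; the GGUF filename is an assumption.
from llama_cpp import Llama

llm = Llama(
    model_path="./merged-Q4_K_M.gguf",  # assumed name of the quantized file
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if a GPU build is available
)

# create_chat_completion generally picks up the chat template stored in the GGUF
# metadata; using R1's template instead means formatting the prompt manually.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what a rank-64 LoRA is."}]
)
print(out["choices"][0]["message"]["content"])
```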