Melvin56/Elita-0.1-Distilled-R1-abliterated-GGUF
Original model: prithivMLmods/Elita-0.1-Distilled-R1-abliterated
All quants are made using the imatrix option.
| Quantization | Size (GB) |
|---|---|
| Q2_K_S | 2.82 |
| Q2_K | 3.01 |
| Q3_K_M | 3.80 |
| Q3_K_L | 4.08 |
| Q4_K_S | 4.46 |
| Q4_K_M | 4.68 |
| Q5_K_S | 5.30 |
| Q5_K_M | 5.44 |
| Q6_K | 6.25 |
| Q8_0 | 8.10 |
| F16 | 15.24 |
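These GGUF files can be loaded by any llama.cpp-based runtime. Below is a minimal Python sketch using huggingface_hub and llama-cpp-python; the GGUF filename is an assumption based on the usual naming pattern, so check the repository's file list and substitute the quant you actually want (e.g. Q4_K_M).

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo into the local Hugging Face cache.
# NOTE: the filename below is assumed; verify it against the repo's file list.
model_path = hf_hub_download(
    repo_id="Melvin56/Elita-0.1-Distilled-R1-abliterated-GGUF",
    filename="Elita-0.1-Distilled-R1-abliterated-Q4_K_M.gguf",  # assumed name
)

# Load the GGUF; n_gpu_layers=-1 offloads all layers to the GPU if one is
# available, otherwise inference runs on the CPU.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (Q2/Q3) trade answer quality for lower memory use; Q4_K_M and above are the usual starting points if you have the RAM or VRAM for them.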
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B