GGUF files of pipihand01/QwQ-32B-Preview-abliterated-linear25.

NOTE: I bear no responsibility for any output of this model. When prompted accordingly, it may generate content that is unsuitable in some situations. Use it at your own risk.

GGUF
Model size: 32.8B params
Architecture: qwen2
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
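Before loading one of these files into a runtime, it can be useful to sanity-check that the download is actually a GGUF file. A minimal sketch, based on the GGUF on-disk layout (a 4-byte `GGUF` magic followed by a little-endian uint32 version); the `.gguf` filename used in the demo is hypothetical:

```python
import os
import struct
import tempfile

GGUF_MAGIC = b"GGUF"  # 4-byte magic at the start of every GGUF file


def read_gguf_version(path):
    """Return the GGUF format version if the file starts with a valid
    GGUF header, or None if the magic bytes do not match."""
    with open(path, "rb") as f:
        if f.read(4) != GGUF_MAGIC:
            return None
        # The format version is a little-endian uint32 right after the magic.
        (version,) = struct.unpack("<I", f.read(4))
        return version


# Demo with a synthetic header standing in for a real quantized file
# (e.g. a hypothetical "QwQ-32B-Preview-abliterated-linear25.Q4_K_M.gguf").
with tempfile.NamedTemporaryFile(delete=False, suffix=".gguf") as tmp:
    tmp.write(GGUF_MAGIC + struct.pack("<I", 3))
    demo_path = tmp.name

print(read_gguf_version(demo_path))  # → 3
os.unlink(demo_path)
```

A truncated or mislabeled download (for instance, an HTML error page saved with a `.gguf` extension) fails this check immediately, before any runtime tries to parse the full header.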

