EXL2 Quantizations of Qwen2.5-72B-Instruct-abliterated
Quantized with exllamav2 release 0.2.6.
Original model: https://huggingface.co/zetasepic/Qwen2.5-72B-Instruct-abliterated
Target: 6.5 bits per weight, with the lm_head layer at 8.0 bits.
```json
"quantization_config": {
    "quant_method": "exl2",
    "version": "0.2.6",
    "bits": 6.5,
    "head_bits": 8,
    "calibration": {
        "rows": 115,
        "length": 2048,
        "dataset": "(default)"
    }
}
```
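The settings above correspond to an exllamav2 `convert.py` invocation along these lines (a sketch, assuming a local exllamav2 checkout; the input and output paths are placeholders, not the actual paths used):

```shell
# Sketch: produce a 6.5 bpw EXL2 quant with an 8-bit lm_head.
# -i: original FP16 model directory (placeholder path)
# -o: scratch directory for the measurement pass
# -cf: output directory for the compiled quantized model
# -b: target bits per weight, -hb: bits for lm_head
python convert.py \
  -i ./Qwen2.5-72B-Instruct-abliterated \
  -o ./work \
  -cf ./Qwen2.5-72B-Instruct-abliterated-exl2-6.5bpw \
  -b 6.5 \
  -hb 8
```

With no `-c` dataset argument, convert.py falls back to its built-in calibration data, which matches the `"dataset": "(default)"` entry in the config above.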
Model tree for Zenabius/Qwen2.5-72B-Instruct-abliterated-exl2-6.5bpw:
- Base model: Qwen/Qwen2.5-72B
- Finetuned: Qwen/Qwen2.5-72B-Instruct
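For inference, the quant can be loaded with the exllamav2 Python API. A minimal sketch, assuming exllamav2 is installed, enough GPU VRAM is available, and the model has been downloaded to a local directory (the path and prompt are placeholders):

```python
# Sketch: load an EXL2 quant and generate text with exllamav2.
# model_dir is a placeholder for the local download of this repo.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./Qwen2.5-72B-Instruct-abliterated-exl2-6.5bpw"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)      # allocate cache as layers load
model.load_autosplit(cache)                   # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
output = generator.generate(prompt="Hello, my name is", max_new_tokens=64)
print(output)
```

A 72B model at 6.5 bpw needs roughly 60 GB of VRAM for the weights alone, so multi-GPU autosplit (as sketched above) is the typical setup.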