ModelCloud optimized and validated quants that pass/meet strict quality assurance on multiple benchmarks. No one quantizes better.
- ModelCloud/QwQ-32B-gptqmodel-4bit-vortex-v1 (Text Generation, 1.58k downloads, 9 likes)
- ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2 (Text Generation, 346 downloads, 7 likes)
- ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v1 (Text Generation, 91 downloads, 5 likes)
- ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1 (4 downloads, 3 likes)
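
A minimal sketch of loading one of these 4-bit checkpoints, assuming the GPTQModel Python package (`pip install gptqmodel`) and its `GPTQModel.load` entry point; the model id is taken from the list above and the prompt is only illustrative:

```python
# Sketch: load a ModelCloud 4-bit GPTQ quant with the GPTQModel package.
# Assumes a CUDA-capable GPU with enough VRAM for the chosen checkpoint.
from gptqmodel import GPTQModel

# Any model id from the list above should load the same way.
model = GPTQModel.load("ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2")

# generate() returns token ids; decode them with the bundled tokenizer.
tokens = model.generate("Explain GPTQ quantization in one sentence:")[0]
print(model.tokenizer.decode(tokens))
```

The same checkpoints should also load through Transformers (`AutoModelForCausalLM.from_pretrained`) when a GPTQ backend such as gptqmodel is installed, if you prefer to stay inside that API.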