Model Card for Mixtral-8x22B-Instruct-v0.1 (GPTQ 8-bit)

8-bit GPTQ-quantized version of Mixtral-8x22B-Instruct-v0.1, produced with Optimum.
See the original model card for more information about the base model.
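
For reference, the sketch below shows how an 8-bit GPTQ quantization like this one can be produced with `transformers` and Optimum/AutoGPTQ. It is a minimal illustration under assumed settings (calibration dataset, output directory name), not necessarily the exact recipe used for this checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "mistralai/Mixtral-8x22B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# 8-bit GPTQ configuration; "c4" is one of the built-in calibration dataset options.
gptq_config = GPTQConfig(bits=8, dataset="c4", tokenizer=tokenizer)

# Quantize while loading the full-precision weights.
# Requires `optimum` and `auto-gptq`, plus enough GPU memory for a model of this size.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=gptq_config,
    device_map="auto",
)

# Output directory name is a placeholder.
model.save_pretrained("mixtral-8x22b-instruct-gptq-8bit")
tokenizer.save_pretrained("mixtral-8x22b-instruct-gptq-8bit")
```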

How to load

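A minimal loading sketch, assuming `transformers`, `optimum`, and `auto-gptq` are installed. The repository id below is a placeholder, since this card does not name it; substitute the id of this quantized checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder: replace with the actual repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)

# The GPTQ quantization config is read from the checkpoint, so no extra
# arguments are needed; device_map="auto" spreads the experts across GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Simple generation example using the model's chat template.
messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```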