Mixtral-8x22B-iMat-GGUF

Quantized from fp32 with love. If you're on the latest release of llama.cpp, you should no longer need to combine the split files before loading.
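
As an illustrative sketch (not part of the original card), assuming the llama-cpp-python bindings: recent llama.cpp builds can load a split GGUF by being pointed at the first shard, and the remaining shards are picked up automatically. The filename and parameter values below are placeholders.

```python
from llama_cpp import Llama

# Point the loader at the first shard of a split GGUF; recent llama.cpp
# builds locate the remaining "-0000N-of-0000M.gguf" shards automatically.
# The path below is a placeholder -- substitute the quant you downloaded.
llm = Llama(
    model_path="Mixtral-8x22B-v0.1.IQ4_XS-00001-of-00002.gguf",
    n_ctx=4096,        # context window; raise it if you have spare VRAM
    n_gpu_layers=-1,   # offload all layers to the GPU (0 = CPU only)
)

out = llm("Q: What is an importance matrix? A:", max_tokens=64)
print(out["choices"][0]["text"])
```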

  • Importance matrix .dat file created using the Q8 quant and groups_merged.txt

For a brief rundown of iMatrix quant performance, please see this PR.

All quants are verified working before being uploaded to the repo, for your safety and convenience.

Tip: For best speed, pick a size that fits in your GPU's VRAM while still leaving some room for context. You may need to pad this further depending on whether you are also running image generation or TTS.
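
As a rough back-of-the-envelope sketch (not from the original card): a quant fits comfortably when the GGUF file size plus the KV cache for your chosen context still leaves headroom in VRAM. The layer and head defaults below are assumptions for Mixtral-8x22B-style models; substitute the values llama.cpp reports at load time.

```python
def fits_in_vram(gguf_bytes: int, ctx_len: int, vram_bytes: int,
                 n_layers: int = 56, n_kv_heads: int = 8,
                 head_dim: int = 128, kv_bytes: int = 2,
                 headroom: float = 0.9) -> bool:
    """Rough check: model weights + fp16 KV cache vs. available VRAM.

    Layer/head defaults are assumptions for Mixtral-8x22B-style models;
    check the values llama.cpp prints at load time. Ignores activation
    buffers and anything else (image gen, TTS) sharing the GPU.
    """
    # K and V caches: 2 tensors * layers * kv_heads * head_dim * context
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * ctx_len * kv_bytes
    return gguf_bytes + kv_cache <= vram_bytes * headroom

# Example: would a ~40 GiB quant plus 8k of context fit on a 48 GiB card?
GIB = 1024 ** 3
print(fits_in_vram(gguf_bytes=40 * GIB, ctx_len=8192, vram_bytes=48 * GIB))
```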

The original model card can be found here.

Format: GGUF
Model size: 141B params
Architecture: llama

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

