Mixtral-8x7B-v0.1-hf-4bit_g64-HQQ

This is a 4-bit (group size 64) version of the Mixtral-8x7B-v0.1 model (https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), quantized via Half-Quadratic Quantization (HQQ).

Basic Usage

To run the model, install the HQQ library together with a compatible transformers version:

# Note: this model is deprecated and requires these older library versions
pip install hqq==0.1.8
pip install transformers==4.46.0

and use it as follows:

model_id = 'mobiuslabsgmbh/Mixtral-8x7B-v0.1-hf-4bit_g64-HQQ'

# Load the quantized model and its tokenizer
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model     = HQQModelForCausalLM.from_quantized(model_id)

# Optional: select the backend used by the quantized linear layers
from hqq.core.quantize import *
HQQLinear.set_backend(HQQBackend.PYTORCH_COMPILE)
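
Once loaded, the model can be used for generation through the standard transformers interface. The snippet below is a minimal sketch, not part of the original card: it assumes the quantized model exposes the usual generate() API and that a CUDA GPU is available; the prompt and generation parameters are illustrative.

# Minimal generation sketch (assumption: model supports transformers' generate()
# and is resident on a CUDA device)
import torch

prompt = "Explain half-quadratic quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))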