We quantized mistralai/Mistral-Small-24B-Instruct-2501 to a 4-bit model using BitsAndBytes.
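
For reference, a 4-bit BitsAndBytes quantization like this is typically produced by passing a `BitsAndBytesConfig` to `from_pretrained`. The exact settings used for this checkpoint are not documented here, so the quant type and compute dtype below are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed settings; the actual quantization parameters for this
# checkpoint are not stated in this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumption: NF4, the common default
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: bf16 compute
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-24B-Instruct-2501",
    quantization_config=bnb_config,
    device_map="auto",
)
```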

To use this model, first install BitsAndBytes:

```bash
pip install -U bitsandbytes
```

Then load the model with `AutoModelForCausalLM`:

```python
from transformers import AutoModelForCausalLM

# The quantization config stored in the checkpoint is picked up
# automatically; device_map="auto" places the 4-bit weights on GPU.
model = AutoModelForCausalLM.from_pretrained(
    "minicreeper/Mistral-Small-24B-Instruct-2501-bnb-4bit",
    device_map="auto",
)
```
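
For a quick end-to-end check, here is a minimal generation sketch, assuming this repository ships the base model's tokenizer and chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minicreeper/Mistral-Small-24B-Instruct-2501-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Example prompt; any chat-style message works here.
messages = [{"role": "user", "content": "Explain 4-bit quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```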