Mistral 7B Instruct v0.2 - GGUF

This is a quantized version of mistralai/Mistral-7B-Instruct-v0.2 in GGUF format. Two quantization methods were used (a short usage sketch follows the list):

  • Q5_K_M: 5-bit, recommended; very low quality loss.
  • Q4_K_M: 4-bit, recommended; balanced quality and size.
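
Below is a minimal usage sketch, assuming llama-cpp-python and huggingface_hub are installed; it is an illustration, not part of the original card. The GGUF filename is an assumption; check the repo's file listing for the exact name.

```python
# Minimal usage sketch (assumes: pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files from this repo.
model_path = hf_hub_download(
    repo_id="wenqiglantz/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # assumed filename; verify in the repo
)

# Mistral Instruct models expect the [INST] ... [/INST] prompt format.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("[INST] Explain GGUF quantization in one sentence. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```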

Description

This repo contains GGUF format model files for Mistral AI's Mistral 7B Instruct v0.2.

This model was quantized in Google Colab; the notebook is linked here.
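
For reference, here is a minimal sketch of a typical llama.cpp quantization workflow; the actual notebook may differ, and the llama.cpp tool names (convert_hf_to_gguf.py, llama-quantize) and all paths are assumptions.

```python
# Sketch of a typical GGUF quantization pipeline with llama.cpp (not the original notebook).
import subprocess

HF_MODEL_DIR = "Mistral-7B-Instruct-v0.2"        # assumed local checkout of the base model
F16_GGUF = "mistral-7b-instruct-v0.2.f16.gguf"   # intermediate 16-bit GGUF file

# 1) Convert the Hugging Face checkpoint to an f16 GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# 2) Produce the two published quantizations from the f16 file.
for quant in ("Q4_K_M", "Q5_K_M"):
    subprocess.run(
        ["llama.cpp/llama-quantize", F16_GGUF,
         f"mistral-7b-instruct-v0.2.{quant}.gguf", quant],
        check=True,
    )
```
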

Model details

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama
  • Quantizations: 4-bit (Q4_K_M), 5-bit (Q5_K_M)
