Mistral 7B v0.1 - GGUF
This is a quantized model for mistralai/Mistral-7B-v0.1. Two quantization methods were used:
- Q5_K_M: 5-bit, preserves most of the model's performance
- Q4_K_M: 4-bit, smaller footprint and lower memory use
Description
This repo contains GGUF format model files for Mistral AI's Mistral 7B v0.1.
This model was quantized in Google Colab.
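As a minimal sketch of how the two quant files trade off size against quality, the helper below picks one based on available RAM and loads it with llama-cpp-python (a common runtime for GGUF files). The filenames and the 8 GB threshold are assumptions for illustration; check the repo's file listing for the actual names.

```python
def pick_quant_file(free_ram_gb: float) -> str:
    # Q5_K_M keeps more of the original quality but is larger;
    # Q4_K_M trades a little quality for a smaller file and lower RAM use.
    # Filenames below are hypothetical examples, not confirmed repo contents.
    if free_ram_gb >= 8.0:
        return "mistral-7b-v0.1.Q5_K_M.gguf"
    return "mistral-7b-v0.1.Q4_K_M.gguf"


def load_model(path: str):
    # Deferred import so the helper above stays dependency-free.
    from llama_cpp import Llama  # pip install llama-cpp-python
    return Llama(model_path=path, n_ctx=4096)


print(pick_quant_file(16.0))
```

With the file downloaded locally, `load_model("mistral-7b-v0.1.Q4_K_M.gguf")` returns a `Llama` object ready for inference.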
Model tree for wenqiglantz/Mistral-7B-v0.1-GGUF
Base model
mistralai/Mistral-7B-v0.1