---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: false
base_model: mistralai/Mistral-7B-Instruct-v0.2
model_creator: Mistral AI_
model_name: Mistral 7B Instruct v0.2
model_type: mistral
prompt_template: '<s>[INST] {prompt} [/INST]
  '
quantized_by: wenqiglantz
---

# Mistral 7B Instruct v0.2 - GGUF

This is a quantized version of `mistralai/Mistral-7B-Instruct-v0.2`. Two quantization methods were used:
- Q5_K_M: 5-bit; very low quality loss, recommended.
- Q4_K_M: 4-bit; balanced size and quality, recommended.
  
<!-- description start -->
## Description

This repo contains GGUF format model files for [Mistral AI_'s Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).

This model was quantized in Google Colab. Notebook link is [here](https://colab.research.google.com/drive/17zT5sLs_f3M404OWhEcwtnlmMKFz3FM7?usp=sharing).
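The prompt template declared in this card's metadata (`<s>[INST] {prompt} [/INST]`) must be applied to user messages before sending them to the model. A minimal sketch in plain Python; `build_prompt` is a hypothetical helper name for illustration, not part of any library:

```python
def build_prompt(prompt: str) -> str:
    # Wrap a user message in the Mistral Instruct template from this
    # card's metadata: <s>[INST] {prompt} [/INST]
    return f"<s>[INST] {prompt} [/INST]"

# Example: format a question for the model.
print(build_prompt("Why is the sky blue?"))
# → <s>[INST] Why is the sky blue? [/INST]
```

A runtime that loads these GGUF files (for example `llama.cpp` or `llama-cpp-python`) would receive the string returned by this helper as its input prompt.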