---
license: apache-2.0
model_name: Mistral-7B-Instruct-v0.3
base_model: mistralai/Mistral-7B-Instruct-v0.3
inference: false
model_creator: mistralai
quantized_by: Second State Inc.
---

![](https://github.com/GaiaNet-AI/.github/assets/45785633/d6976adc-f97d-4f86-a648-0f2f5c8e7eee)

# Mistral-7B-Instruct-v0.3-GGUF

## Original Model

[mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)

## Run with GaiaNet

**Prompt template**

prompt template: `mistral-instruct`
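
For reference, the `mistral-instruct` template wraps each user turn in `[INST] ... [/INST]` tags. A minimal sketch of the resulting prompt string (placeholders are illustrative; the chat template bundled with the model is authoritative):

```text
<s>[INST] {first user message} [/INST] {assistant reply}</s>[INST] {follow-up message} [/INST]
```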

**Context size**

chat_ctx_size: `32000`
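
A minimal sketch of how these two settings might appear in a GaiaNet node's `config.json`; the key names (`chat`, `chat_ctx_size`, `prompt_template`) and string-typed values are assumptions based on the node-guide, so treat the customize docs linked below as authoritative:

```bash
# Illustrative sketch only: key names and value types are assumptions;
# verify against the GaiaNet node-guide before using.
cat > config.json <<'EOF'
{
  "chat": "https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/resolve/main/Mistral-7B-Instruct-v0.3-Q5_K_M.gguf",
  "chat_ctx_size": "32000",
  "prompt_template": "mistral-instruct"
}
EOF
```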

**Run with GaiaNet**

- Quick start: https://docs.gaianet.ai/node-guide/quick-start (typical commands are sketched below)

- Customize your node: https://docs.gaianet.ai/node-guide/customize
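
The quick start boils down to roughly the following shell session (a sketch from the guide above, which remains the source of truth):

```bash
# Install the GaiaNet node software via its one-line installer.
curl -sSfL 'https://github.com/GaiaNet-AI/gaianet-node/releases/latest/download/install.sh' | bash

# Initialize the node (downloads the model files named in config.json), then start it.
gaianet init
gaianet start
```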

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ---- |
| [Mistral-7B-Instruct-v0.3-Q2_K.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q2_K.gguf) | Q2_K | 2 | 2.72 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-Instruct-v0.3-Q3_K_L.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q3_K_L.gguf) | Q3_K_L | 3 | 3.83 GB | small, substantial quality loss |
| [Mistral-7B-Instruct-v0.3-Q3_K_M.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB | very small, high quality loss |
| [Mistral-7B-Instruct-v0.3-Q3_K_S.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB | very small, high quality loss |
| [Mistral-7B-Instruct-v0.3-Q4_0.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q4_0.gguf) | Q4_0 | 4 | 4.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-Instruct-v0.3-Q4_K_M.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB | medium, balanced quality - recommended |
| [Mistral-7B-Instruct-v0.3-Q4_K_S.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB | small, greater quality loss |
| [Mistral-7B-Instruct-v0.3-Q5_0.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q5_0.gguf) | Q5_0 | 5 | 5.00 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-Instruct-v0.3-Q5_K_M.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q5_K_M.gguf) | Q5_K_M | 5 | 5.14 GB | large, very low quality loss - recommended |
| [Mistral-7B-Instruct-v0.3-Q5_K_S.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB | large, low quality loss - recommended |
| [Mistral-7B-Instruct-v0.3-Q6_K.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q6_K.gguf) | Q6_K | 6 | 5.95 GB | very large, extremely low quality loss |
| [Mistral-7B-Instruct-v0.3-Q8_0.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-Q8_0.gguf) | Q8_0 | 8 | 7.70 GB | very large, extremely low quality loss - not recommended |
| [Mistral-7B-Instruct-v0.3-f16.gguf](https://huggingface.co/gaianet/Mistral-7B-Instruct-v0.3-GGUF/blob/main/Mistral-7B-Instruct-v0.3-f16.gguf) | f16 | 16 | 14.5 GB | full 16-bit precision, no quantization |
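
To grab one of these files directly, outside of GaiaNet, one option is the Hugging Face CLI; the Q5_K_M pick below is just an example:

```bash
# Download a single quantized file from this repo into the current directory.
huggingface-cli download gaianet/Mistral-7B-Instruct-v0.3-GGUF \
  Mistral-7B-Instruct-v0.3-Q5_K_M.gguf --local-dir .
```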

*Quantized with llama.cpp b3030.*