zhaode committed
Commit bd1ac9f · verified · 1 Parent(s): 0cd1bdd

Upload folder using huggingface_hub

Files changed (6)
  1. README.md +38 -1
  2. config.json +3 -4
  3. llm.mnn +3 -0
  4. llm.mnn.json +3 -0
  5. llm.mnn.weight +3 -0
  6. llm_config.json +3 -4
README.md CHANGED
@@ -9,5 +9,42 @@ tags:
  # phi-2-MNN
 
  ## Introduction
+ This model is a 4-bit quantized version of the MNN model exported from [phi-2](https://modelscope.cn/models/mengzhao/phi-2/summary) using [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).
 
- This model is a 4-bit quantized version of the MNN model exported from phi-2 using [llm-export](https://github.com/wangzhaode/llm-export).
+ ## Download
+ ```bash
+ # install huggingface_hub
+ pip install huggingface_hub
+ ```
+ ```bash
+ # shell download
+ huggingface-cli download 'taobao-mnn/phi-2-MNN' --local-dir 'path/to/dir'
+ ```
+ ```python
+ # SDK download
+ from huggingface_hub import snapshot_download
+ model_dir = snapshot_download('taobao-mnn/phi-2-MNN')
+ ```
+
+ ```bash
+ # git clone
+ git clone https://www.modelscope.cn/taobao-mnn/phi-2-MNN
+ ```
+
+ ## Usage
+ ```bash
+ # clone MNN source
+ git clone https://github.com/alibaba/MNN.git
+
+ # compile
+ cd MNN
+ mkdir build && cd build
+ cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
+ make -j
+
+ # run
+ ./llm_demo /path/to/phi-2-MNN/config.json prompt.txt
+ ```
+
+ ## Document
+ [MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
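The new Download and Usage sections can also be chained from Python; a minimal sketch, assuming the MNN build tree sits at `./MNN/build` and a `prompt.txt` exists in the working directory (both paths are placeholders, not part of this commit):

```python
# Sketch: fetch the MNN files with huggingface_hub, then call the llm_demo
# binary produced by the cmake/make steps in the README above.
import subprocess
from huggingface_hub import snapshot_download

# Download the repository into a local directory (path is an assumption).
model_dir = snapshot_download("taobao-mnn/phi-2-MNN", local_dir="./phi-2-MNN")

# llm_demo lives in the MNN build directory created by `mkdir build && cd build`;
# adjust this path to your own build location.
subprocess.run(
    ["./MNN/build/llm_demo", f"{model_dir}/config.json", "prompt.txt"],
    check=True,
)
```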
config.json CHANGED
@@ -1,9 +1,8 @@
  {
- "llm_model": "phi2-int4.mnn",
- "llm_weight": "phi2-int4.mnn.weight",
-
+ "llm_model": "llm.mnn",
+ "llm_weight": "llm.mnn.weight",
  "backend_type": "cpu",
  "thread_num": 4,
  "precision": "low",
  "memory": "low"
- }
+ }
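Since this change only renames the model files referenced by `config.json` (from `phi2-int4.mnn`/`phi2-int4.mnn.weight` to `llm.mnn`/`llm.mnn.weight`), a quick local check that the references resolve can be useful; a small sketch, assuming the repository was downloaded to `./phi-2-MNN` (a placeholder path):

```python
# Sketch: confirm the files named in config.json sit next to it on disk.
import json
from pathlib import Path

config_path = Path("./phi-2-MNN/config.json")  # assumed local download location
config = json.loads(config_path.read_text())

for key in ("llm_model", "llm_weight"):
    target = config_path.parent / config[key]
    print(f"{key}: {config[key]} -> exists: {target.exists()}")
```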
llm.mnn ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60f2064d6235755629ccb91177177dffa330c2dd64c9dc0ff3a00764abc58169
+ size 1195984
llm.mnn.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90c13b5d8b59642a9eea79a4014d121e4e876b4d9bb295b495b451cbded3f44f
+ size 7325129
llm.mnn.weight ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:809ed6b0670b761921efa79989ac850a9e1e0891b94c28b2013a5abc1cfd9311
+ size 1492463766
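The three added files above are Git LFS pointers, so a plain `git clone` without LFS support leaves ~100-byte stubs in place of the ~1.2 MB graph and ~1.4 GB weight blob (the `huggingface-cli`/`snapshot_download` routes fetch the real content). A stdlib-only sketch to spot that case, assuming the repository lives at `./phi-2-MNN`:

```python
# Sketch: detect a file that is still a Git LFS pointer (the three-line
# "version / oid / size" format shown above) rather than the real blob.
from pathlib import Path

def lfs_pointer_info(path: Path) -> dict | None:
    """Return pointer fields if `path` is an LFS pointer, else None."""
    with path.open("rb") as f:
        head = f.read(512)  # a pointer file is only a few lines long
    if not head.startswith(b"version https://git-lfs.github.com/spec/v1"):
        return None  # real content is already checked out
    fields = {}
    for line in head.decode().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

info = lfs_pointer_info(Path("./phi-2-MNN/llm.mnn.weight"))
if info:
    print(f"still an LFS pointer; expected blob size {info['size']} bytes, run `git lfs pull`")
else:
    print("weight file is fully checked out")
```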
llm_config.json CHANGED
@@ -1,15 +1,14 @@
  {
  "hidden_size": 2560,
  "layer_nums": 32,
- "attention_mask": "glm",
+ "attention_mask": "float",
  "key_value_shape": [
+ 2,
  1,
  0,
- 2,
  32,
  80
  ],
  "prompt_template": "Instruct: %s\nOutput:",
- "is_visual": false,
- "is_single": true
+ "is_visual": false
  }
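The reshaped `key_value_shape` ([2, 1, 0, 32, 80]) together with `layer_nums` gives a rough sense of KV-cache cost, and 32 × 80 is consistent with `hidden_size` 2560. A back-of-the-envelope sketch, assuming the leading 2 is the K/V pair, the 0 is the growing sequence dimension, and an fp16 cache — none of which the config states explicitly:

```python
# Sketch: estimate KV-cache growth per generated token from llm_config.json.
# The dimension reading and fp16 element size are assumptions (see note above).
import json

with open("./phi-2-MNN/llm_config.json") as f:  # assumed local path
    cfg = json.load(f)

kv, _batch, _seq, heads, head_dim = cfg["key_value_shape"]  # [2, 1, 0, 32, 80]
layers = cfg["layer_nums"]                                  # 32
bytes_per_elem = 2                                          # assuming fp16

per_token = kv * heads * head_dim * layers * bytes_per_elem
print(f"~{per_token} bytes (~{per_token / 1024:.0f} KiB) of KV cache per token")
# 2 * 32 * 80 * 32 * 2 = 327,680 bytes ≈ 320 KiB per token under these assumptions
```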