---
license: llama3.1
datasets:
- NeelNanda/pile-10k
---

## Model Card Details

This model is an int4 model with group_size -1 and symmetric quantization of [meta-llama/Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct), generated by [intel/auto-round](https://github.com/intel/auto-round).

## Inference on CPU/HPU/CUDA

HPU: a Docker image with the Gaudi Software Stack is recommended. Please refer to the following script for environment setup; more details can be found in the [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built).
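
A minimal launch sketch is below; the image tag is an assumption, so pick the current Gaudi PyTorch image for your SynapseAI release from the guide above.

```bash
# Launch a Gaudi PyTorch container (adjust the image tag to your SynapseAI release).
docker run -it --runtime=habana \
  -e HABANA_VISIBLE_DEVICES=all \
  -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
  --cap-add=sys_nice --net=host --ipc=host \
  vault.habana.ai/gaudi-docker/1.18.0/ubuntu22.04/habanalabs/pytorch-installer-2.4.0:latest
```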

```python
from auto_round import AutoHfQuantizer  ## must import for auto-round format
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_dir = "OPEA/Meta-Llama-3.1-405B-Instruct-int4-sym-inc"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)

model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype='auto',
    device_map="auto",
)

##import habana_frameworks.torch.core as htcore  ## uncomment it for HPU
##import habana_frameworks.torch.hpu as hthpu  ## uncomment it for HPU
##model = model.to(torch.bfloat16).to("hpu")  ## uncomment it for HPU

prompt = "There is a girl who likes adventure,"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=200,  ## change this to align with the official usage
    do_sample=False  ## change this to align with the official usage
)
## strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

##prompt = "There is a girl who likes adventure,"
##INT4
"""That sounds exciting! Does she have a specific type of adventure in mind, such as traveling to new places, trying new activities, or exploring the outdoors? Or is she more of a spontaneous, "see where the day takes me" kind of person?
"""

##prompt = "Which one is larger, 9.11 or 9.8"
##INT4
"""9.11 is larger than 9.8."""

##prompt = "How many r in strawberry."
##INT4
"""There are 2 Rs in the word "strawberry"."""

##prompt = "Once upon a time,"
##INT4
"""
...in a land far, far away... Would you like me to continue the story, or do you have a specific direction in mind?
"""
```

### Evaluate the model

Install the harness with `pip3 install lm-eval==0.4.5`. We do not have enough resources to evaluate the bf16 model.

```bash
auto-round --eval --model_name "OPEA/Meta-Llama-3.1-405B-Instruct-int4-sym-inc" --eval_bs 16 --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k
```
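
Alternatively, the harness can be invoked directly; a minimal sketch using lm-eval's standard `hf` backend (note: `auto_round` must be installed so the auto-round quantization config can be loaded):

```bash
lm_eval --model hf \
  --model_args pretrained=OPEA/Meta-Llama-3.1-405B-Instruct-int4-sym-inc \
  --tasks mmlu,arc_challenge,winogrande \
  --batch_size 16
```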

| Metric                     | INT4   |
| -------------------------- | ------ |
| avg                        |        |
| leaderboard_mmlu_pro 5shot |        |
| leaderboard_ifeval         |        |
| mmlu                       | 0.8551 |
| lambada_openai             |        |
| hellaswag                  |        |
| winogrande                 | 0.8303 |
| piqa                       |        |
| truthfulqa_mc1             |        |
| openbookqa                 |        |
| boolq                      |        |
| arc_easy                   |        |
| arc_challenge              | 0.6451 |
| gsm8k(5shot) strict match  |        |

## Generate the model

Here is a sample command to generate the model. Use Torch 2.6 and add `torch._dynamo.config.cache_size_limit = 130` to the code; otherwise, OOM will occur on an 80GB GPU device. Roughly 800GB of CPU memory is also required.

```bash
auto-round \
  --model meta-llama/Llama-3.1-405B-Instruct \
  --device 0 \
  --group_size -1 \
  --batch_size 1 \
  --gradient_accumulate_steps 4 \
  --bits 4 \
  --disable_eval \
  --low_gpu_mem_usage \
  --format 'auto_round' \
  --output_dir "./tmp_autoround"
```
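
If you drive the quantization from Python instead of the CLI, here is a minimal sketch of where the `torch._dynamo` setting fits, assuming auto-round's `AutoRound` API (the argument names mirror the CLI flags above but are not verified against every auto-round version):

```python
import torch

# Raise the dynamo cache limit as noted above; otherwise OOM will occur.
torch._dynamo.config.cache_size_limit = 130

from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "meta-llama/Llama-3.1-405B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4 bits, per-channel quantization (group_size=-1), symmetric -- matching the CLI flags above.
autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=-1,
    sym=True,
    batch_size=1,
    gradient_accumulate_steps=4,
    low_gpu_mem_usage=True,
)
autoround.quantize()
autoround.save_quantized("./tmp_autoround", format="auto_round")
```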

## Ethical Considerations and Limitations

The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here is a useful link to learn more about Intel's AI software:

- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Cite

@article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} }

[arxiv](https://arxiv.org/abs/2309.05516), [github](https://github.com/intel/auto-round)