---
tags:
- w4a16
- int4
- vllm
license: apache-2.0
license_link: https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: ibm-granite/granite-3.1-8b-base
library_name: transformers
---

# granite-3.1-8b-base-quantized.w4a16

## Model Overview
- **Model Architecture:** granite-3.1-8b-base
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT4
  - **Activation quantization:** None
- **Release Date:** 1/8/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [ibm-granite/granite-3.1-8b-base](https://huggingface.co/ibm-granite/granite-3.1-8b-base).
It achieves an average score of 67.30 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 67.44.

### Model Optimizations

This model was obtained by quantizing the weights of [ibm-granite/granite-3.1-8b-base](https://huggingface.co/ibm-granite/granite-3.1-8b-base) to the INT4 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 4, cutting disk size and GPU memory requirements by approximately 75%. Only the weights of the linear operators within transformer blocks are quantized.
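
As a rough sanity check on that 75% figure, here is a back-of-the-envelope sketch (hedged: the parameter count below is approximate, and a real checkpoint also stores quantization scales, zero-points, and unquantized embeddings):

```python
# Approximate weight-only memory footprint before and after quantization.
# num_params is an approximation for granite-3.1-8b-base, not an exact count.
num_params = 8.2e9
bf16_gib = num_params * 16 / 8 / 2**30   # 16 bits -> 2 bytes per parameter
int4_gib = num_params * 4 / 8 / 2**30    # 4 bits -> 0.5 bytes per parameter
print(f"BF16 weights: ~{bf16_gib:.1f} GiB")
print(f"INT4 weights: ~{int4_gib:.1f} GiB ({1 - int4_gib / bf16_gib:.0%} smaller)")
```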

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 4096, 1
model_name = "neuralmagic-ent/granite-3.1-8b-base-quantized.w4a16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
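
As a minimal sketch of that serving path (the port, prompt, and flags below are illustrative assumptions, not part of this model card):

```bash
# Launch an OpenAI-compatible server (defaults to port 8000).
vllm serve neuralmagic-ent/granite-3.1-8b-base-quantized.w4a16 --max-model-len 4096

# From another shell, query the completions endpoint.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "neuralmagic-ent/granite-3.1-8b-base-quantized.w4a16", "prompt": "Once upon a time,", "max_tokens": 32}'
```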

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.

```bash
python quantize.py --model_path ibm-granite/granite-3.1-8b-base --quant_path "output_dir/granite-3.1-8b-base-quantized.w4a16" --calib_size 3072 --dampening_frac 0.1 --observer mse
```

```python
import argparse

from datasets import load_dataset
from transformers import AutoTokenizer
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot

parser = argparse.ArgumentParser()
parser.add_argument('--model_path', type=str)
parser.add_argument('--quant_path', type=str)
parser.add_argument('--calib_size', type=int, default=256)
parser.add_argument('--dampening_frac', type=float, default=0.1)
parser.add_argument('--observer', type=str, default="minmax")
args = parser.parse_args()

model = SparseAutoModelForCausalLM.from_pretrained(
    args.model_path,
    device_map="auto",
    torch_dtype="auto",
    use_cache=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(args.model_path)

# Load and subsample the calibration dataset.
NUM_CALIBRATION_SAMPLES = args.calib_size
DATASET_ID = "neuralmagic/LLM_compression_calibration"
DATASET_SPLIT = "train"
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

def preprocess(example):
    concat_txt = example["instruction"] + "\n" + example["output"]
    return {"text": concat_txt}

ds = ds.map(preprocess)

def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        truncation=False,
        add_special_tokens=True,
    )

ds = ds.map(tokenize, remove_columns=ds.column_names)

# Quantize the weights of all Linear modules except the lm_head with GPTQ.
recipe = [
    GPTQModifier(
        targets=["Linear"],
        ignore=["lm_head"],
        scheme="w4a16",
        dampening_frac=args.dampening_frac,
        observer=args.observer,
    )
]
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    num_calibration_samples=args.calib_size,
    max_seq_length=8196,
)

# Save the compressed model and tokenizer to disk.
model.save_pretrained(args.quant_path, save_compressed=True)
tokenizer.save_pretrained(args.quant_path)
```
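
As a quick check that the export worked, one can confirm the saved directory carries a quantization config (a hedged sketch: the exact fields depend on the llm-compressor and transformers versions, and the path is the illustrative output directory from the command above):

```python
from transformers import AutoConfig

# Path produced by the quantization script above (illustrative).
config = AutoConfig.from_pretrained("output_dir/granite-3.1-8b-base-quantized.w4a16")
print(config.quantization_config)  # expect a 4-bit weight scheme with 16-bit activations
```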

## Evaluation

The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:

OpenLLM Leaderboard V1:
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/granite-3.1-8b-base-quantized.w4a16",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config
```

#### HumanEval
##### Generation
```bash
python3 codegen/generate.py \
  --model neuralmagic-ent/granite-3.1-8b-base-quantized.w4a16 \
  --bs 16 \
  --temperature 0.2 \
  --n_samples 50 \
  --root "." \
  --dataset humaneval
```
##### Sanitization
```bash
python3 evalplus/sanitize.py \
  humaneval/neuralmagic-ent--granite-3.1-8b-base-quantized.w4a16_vllm_temp_0.2
```
##### Evaluation
```bash
evalplus.evaluate \
  --dataset humaneval \
  --samples humaneval/neuralmagic-ent--granite-3.1-8b-base-quantized.w4a16_vllm_temp_0.2-sanitized
```

### Accuracy

#### OpenLLM Leaderboard V1 evaluation scores

| Metric | ibm-granite/granite-3.1-8b-base | neuralmagic-ent/granite-3.1-8b-base-quantized.w4a16 |
|-----------------------------------------|:---------------------------------:|:-------------------------------------------:|
| ARC-Challenge (Acc-Norm, 25-shot) | 64.68 | 64.25 |
| GSM8K (Strict-Match, 5-shot) | 60.88 | 60.50 |
| HellaSwag (Acc-Norm, 10-shot) | 83.52 | 83.22 |
| MMLU (Acc, 5-shot) | 63.33 | 63.16 |
| TruthfulQA (MC2, 0-shot) | 51.33 | 52.59 |
| Winogrande (Acc, 5-shot) | 80.90 | 80.11 |
| **Average Score** | **67.44** | **67.30** |
| **Recovery (%)** | **100.00** | **99.80** |
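
The Recovery row is simply the quantized average expressed as a percentage of the baseline average:

```python
# Recovery = quantized score / baseline score, in percent,
# computed from the averages in the table above.
baseline, quantized = 67.44, 67.30
print(f"Recovery: {quantized / baseline * 100:.2f}%")  # ~99.79, reported as 99.80 above
```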

#### OpenLLM Leaderboard V2 evaluation scores

| Metric | ibm-granite/granite-3.1-8b-base | neuralmagic-ent/granite-3.1-8b-base-quantized.w4a16 |
|-----------------------------------------|:---------------------------------:|:-------------------------------------------:|
| IFEval (Inst Level Strict Acc, 0-shot) | 49.04 | |
| BBH (Acc-Norm, 3-shot) | 47.76 | |
| Math-Hard (Exact-Match, 4-shot) | 7.65 | |
| GPQA (Acc-Norm, 0-shot) | 28.73 | |
| MUSR (Acc-Norm, 0-shot) | 38.82 | |
| MMLU-Pro (Acc, 5-shot) | 32.11 | |
| **Average Score** | **34.02** | |
| **Recovery (%)** | **100.00** | |

#### HumanEval pass@1 scores

| Metric | ibm-granite/granite-3.1-8b-base | neuralmagic-ent/granite-3.1-8b-base-quantized.w4a16 |
|-----------------------------------------|:---------------------------------:|:-------------------------------------------:|
| HumanEval Pass@1 | 44.10 | 43.10 |
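
Since generation uses 50 samples per task at temperature 0.2, the pass@1 numbers are presumably the unbiased estimator from the HumanEval paper averaged over tasks, rather than a single greedy sample. A hedged sketch of that estimator (the example counts are made up):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): n samples, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical task: 22 of 50 samples pass -> per-task pass@1 of 0.44;
# the reported score averages this quantity over all HumanEval tasks.
print(pass_at_k(n=50, c=22, k=1))
```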