---
license: apache-2.0
language:
- en
tags:
- moe
- fp8
- vllm
---

# Mixtral-8x7B-Instruct-v0.1-FP8

## Model Overview
- **Model Architecture:** Mixtral-8x7B-Instruct-v0.1
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 3/6/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
It achieves an average score of <TODO> on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves <TODO>.

### Model Optimizations

This model was obtained by quantizing the weights and activations of Mixtral-8x7B-Instruct-v0.1 to the FP8 data type, making it ready for inference with vLLM.
This optimization reduces the number of bits per parameter from 16 to 8, cutting the disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized; the MLP routers are kept at their original precision.

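As a rough illustration of what static per-tensor FP8 (E4M3) quantization does to a weight matrix, the sketch below runs a quantize-dequantize round trip in plain PyTorch. It is conceptual only, not llm-compressor's implementation; the tensor shape and the use of `torch.float8_e4m3fn` (available in PyTorch 2.1+) are illustrative assumptions.

```python
import torch

# Conceptual per-tensor FP8 (E4M3) round trip; NOT llm-compressor's kernel.
FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def fp8_round_trip(x: torch.Tensor):
    scale = x.abs().max() / FP8_E4M3_MAX                 # per-tensor scale
    q = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)   # stay in FP8 range
    q = q.to(torch.float8_e4m3fn)                        # 1 byte per value
    return q, q.to(torch.float32) * scale                # dequantized view

w = torch.randn(1024, 1024)  # stand-in for a linear-layer weight
q, w_hat = fp8_round_trip(w)
print(f"storage: {w.numel() * 2} bytes (16-bit) -> {q.numel()} bytes (FP8)")
print(f"max abs error: {(w - w_hat).abs().max().item():.4f}")
```
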
## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 4096, 4
model_name = "neuralmagic/Mixtral-8x7B-Instruct-v0.1-FP8"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

# Render each conversation with the model's chat template before generation.
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
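As a minimal sketch of that serving path (the launch flags mirror the offline example above and should be adjusted to your hardware; port 8000 is vLLM's default, and the pirate prompt is reused for illustration):

```python
# Launch the server first, e.g.:
#   vllm serve neuralmagic/Mixtral-8x7B-Instruct-v0.1-FP8 \
#       --tensor-parallel-size 4 --max-model-len 4096
# Then query it with the OpenAI client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="neuralmagic/Mixtral-8x7B-Instruct-v0.1-FP8",
    messages=[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
    temperature=0.3,
    max_tokens=256,
)
print(response.choices[0].message.content)
```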

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the script below (saved as `quantize.py`) with the following command:

```bash
python quantize.py --model_id mistralai/Mixtral-8x7B-Instruct-v0.1 --save_path "output_dir" --calib_size 128
```

```python
import argparse
import os

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.compression.helpers import calculate_offload_device_map


def main():
    # Set up command-line argument parsing
    parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8')
    parser.add_argument('--model_id', type=str, required=True,
                        help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-8B-Instruct")')
    parser.add_argument('--save_path', type=str, default='.',
                        help='Custom path to save the quantized model. If not provided, will use model_name-FP8')
    parser.add_argument('--calib_size', type=int, default=256)
    args = parser.parse_args()

    # Spread the BF16 model across available GPUs (with CPU offload if needed)
    device_map = calculate_offload_device_map(
        args.model_id,
        reserve_for_hessians=False,
        num_gpus=torch.cuda.device_count(),
        trust_remote_code=True,
        torch_dtype=torch.bfloat16,
    )

    model = AutoModelForCausalLM.from_pretrained(
        args.model_id, device_map=device_map, torch_dtype=torch.bfloat16, trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained(args.model_id)

    # Load and subsample the calibration dataset
    NUM_CALIBRATION_SAMPLES = args.calib_size
    DATASET_ID = "garage-bAInd/Open-Platypus"
    DATASET_SPLIT = "train"
    ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
    ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

    def preprocess(example):
        concat_txt = example["instruction"] + "\n" + example["output"]
        return {"text": concat_txt}

    ds = ds.map(preprocess)

    def tokenize(sample):
        return tokenizer(
            sample["text"],
            padding=False,
            truncation=False,
            add_special_tokens=True,
        )

    ds = ds.map(tokenize, remove_columns=ds.column_names)

    # Configure the quantization algorithm and scheme:
    # FP8 for all Linear layers except the LM head and the MoE router gates
    recipe = QuantizationModifier(
        targets="Linear", scheme="FP8", ignore=["lm_head", "re:.*block_sparse_moe.gate"]
    )

    # Apply quantization
    oneshot(
        model=model,
        dataset=ds,
        recipe=recipe,
        num_calibration_samples=args.calib_size,
    )

    save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8")
    os.makedirs(save_path, exist_ok=True)

    # Save to disk in compressed-tensors format
    model.save_pretrained(save_path, save_compressed=True, skip_compression_stats=True)
    tokenizer.save_pretrained(save_path)
    print(f"Model and tokenizer saved to: {save_path}")


if __name__ == "__main__":
    main()
```
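
As a quick sanity check after saving (the exact contents of `quantization_config` depend on the llm-compressor and transformers versions), the emitted `config.json` should carry a compressed-tensors quantization entry. The path below assumes the command shown above with `--save_path "output_dir"`:

```python
from transformers import AutoConfig

# Path follows from running quantize.py with --save_path "output_dir".
cfg = AutoConfig.from_pretrained("output_dir/Mixtral-8x7B-Instruct-v0.1-FP8")
print(cfg.quantization_config)  # expect a compressed-tensors FP8 description
```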

## Evaluation

The model was evaluated on the OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) benchmark using the following command:

```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Mixtral-8x7B-Instruct-v0.1-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config
```

### Accuracy

#### OpenLLM Leaderboard V1 evaluation scores

| Metric                            | mistralai/Mixtral-8x7B-Instruct-v0.1 | neuralmagic/Mixtral-8x7B-Instruct-v0.1-FP8 |
|-----------------------------------|:------------------------------------:|:------------------------------------------:|
| ARC-Challenge (Acc-Norm, 25-shot) | <TODO>                               | <TODO>                                     |
| GSM8K (Strict-Match, 5-shot)      | <TODO>                               | <TODO>                                     |
| HellaSwag (Acc-Norm, 10-shot)     | <TODO>                               | <TODO>                                     |
| MMLU (Acc, 5-shot)                | <TODO>                               | <TODO>                                     |
| TruthfulQA (MC2, 0-shot)          | <TODO>                               | <TODO>                                     |
| Winogrande (Acc, 5-shot)          | <TODO>                               | <TODO>                                     |
| **Average Score**                 | **<TODO>**                           | **<TODO>**                                 |
| **Recovery (%)**                  | **100.00**                           | **<TODO>**                                 |
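
Recovery expresses the quantized model's average score as a percentage of the unquantized baseline. As a one-line sketch (the inputs are placeholders until the <TODO> scores above are measured):

```python
# Recovery (%) = 100 * quantized average / unquantized average.
def recovery(fp8_avg: float, bf16_avg: float) -> float:
    return 100.0 * fp8_avg / bf16_avg

print(f"{recovery(72.0, 73.0):.2f}%")  # illustrative numbers only -> 98.63%
```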