shreyajn committed on
Commit 01d1834 · verified · 1 Parent(s): b9f9fcc

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +52 -190
README.md CHANGED
@@ -37,18 +37,39 @@ More details on model performance across various devices, can be found

  | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  |---|---|---|---|---|---|---|---|---|
- | TextEncoder_Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 11.633 ms | 0 - 1 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/TextEncoder_Quantized.bin) |
- | TextEncoder_Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 7.759 ms | 0 - 8 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/TextEncoder_Quantized.bin) |
- | TextEncoder_Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 11.773 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |
- | TextEncoder_Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 10.7 ms | 0 - 1 MB | UINT16 | NPU | Use Export Script |
- | VAEDecoder_Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 217.134 ms | 0 - 2 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/VAEDecoder_Quantized.bin) |
- | VAEDecoder_Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 161.705 ms | 0 - 8 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/VAEDecoder_Quantized.bin) |
- | VAEDecoder_Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 220.179 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |
- | VAEDecoder_Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 225.416 ms | 0 - 2 MB | UINT16 | NPU | Use Export Script |
- | UNet_Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 101.094 ms | 0 - 2 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/UNet_Quantized.bin) |
- | UNet_Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 72.62 ms | 0 - 8 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/UNet_Quantized.bin) |
- | UNet_Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 102.486 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |
- | UNet_Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 96.631 ms | 1 - 2 MB | UINT16 | NPU | Use Export Script |
@@ -58,7 +79,7 @@ More details on model performance across various devices, can be found

  Install the package via pip:
  ```bash
- pip install "qai-hub-models[stable-diffusion-v2-1-quantized]"
  ```

@@ -76,7 +97,7 @@ Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.


- ## Demo on-device

  The package contains a simple end-to-end demo that downloads pre-trained
  weights and runs this model on a sample input.
@@ -109,171 +130,34 @@ python -m qai_hub_models.models.stable_diffusion_v2_1_quantized.export
  ```
  Profiling Results
  ------------------------------------------------------------
- TextEncoder_Quantized
  Device : Samsung Galaxy S23 (13)
  Runtime : QNN
- Estimated inference time (ms) : 11.6
- Estimated peak memory usage (MB): [0, 1]
- Total # Ops : 1040
- Compute Unit(s) : NPU (1040 ops)

  ------------------------------------------------------------
- VAEDecoder_Quantized
  Device : Samsung Galaxy S23 (13)
  Runtime : QNN
- Estimated inference time (ms) : 217.1
- Estimated peak memory usage (MB): [0, 2]
- Total # Ops : 170
- Compute Unit(s) : NPU (170 ops)

  ------------------------------------------------------------
- UNet_Quantized
  Device : Samsung Galaxy S23 (13)
  Runtime : QNN
- Estimated inference time (ms) : 101.1
- Estimated peak memory usage (MB): [0, 2]
- Total # Ops : 6361
- Compute Unit(s) : NPU (6361 ops)
  ```


- ## How does this work?
-
- This [export script](https://aihub.qualcomm.com/models/stable_diffusion_v2_1_quantized/qai_hub_models/models/Stable-Diffusion-v2.1/export.py)
- leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
- on-device. Let's go through each step below in detail:
-
- Step 1: **Compile model for on-device deployment**
-
- To compile a PyTorch model for on-device deployment, we first trace the model
- in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
-
- ```python
- import torch
-
- import qai_hub as hub
- from qai_hub_models.models.stable_diffusion_v2_1_quantized import Model
-
- # Load the model
- model = Model.from_pretrained()
- text_encoder_model = model.text_encoder
- unet_model = model.unet
- vae_decoder_model = model.vae_decoder
-
- # Device
- device = hub.Device("Samsung Galaxy S23")
-
- # Trace model
- text_encoder_input_shape = text_encoder_model.get_input_spec()
- text_encoder_sample_inputs = text_encoder_model.sample_inputs()
-
- traced_text_encoder_model = torch.jit.trace(text_encoder_model, [torch.tensor(data[0]) for _, data in text_encoder_sample_inputs.items()])
-
- # Compile model on a specific device
- text_encoder_compile_job = hub.submit_compile_job(
-     model=traced_text_encoder_model,
-     device=device,
-     input_specs=text_encoder_model.get_input_spec(),
- )
-
- # Get target model to run on-device
- text_encoder_target_model = text_encoder_compile_job.get_target_model()
-
- # Trace model
- unet_input_shape = unet_model.get_input_spec()
- unet_sample_inputs = unet_model.sample_inputs()
-
- traced_unet_model = torch.jit.trace(unet_model, [torch.tensor(data[0]) for _, data in unet_sample_inputs.items()])
-
- # Compile model on a specific device
- unet_compile_job = hub.submit_compile_job(
-     model=traced_unet_model,
-     device=device,
-     input_specs=unet_model.get_input_spec(),
- )
-
- # Get target model to run on-device
- unet_target_model = unet_compile_job.get_target_model()
-
- # Trace model
- vae_decoder_input_shape = vae_decoder_model.get_input_spec()
- vae_decoder_sample_inputs = vae_decoder_model.sample_inputs()
-
- traced_vae_decoder_model = torch.jit.trace(vae_decoder_model, [torch.tensor(data[0]) for _, data in vae_decoder_sample_inputs.items()])
-
- # Compile model on a specific device
- vae_decoder_compile_job = hub.submit_compile_job(
-     model=traced_vae_decoder_model,
-     device=device,
-     input_specs=vae_decoder_model.get_input_spec(),
- )
-
- # Get target model to run on-device
- vae_decoder_target_model = vae_decoder_compile_job.get_target_model()
- ```
-
-
- Step 2: **Performance profiling on cloud-hosted device**
-
- After uploading the compiled models from Step 1, they can be profiled on-device
- using the `target_model`. Note that this script runs the model on a device
- automatically provisioned in the cloud. Once the job is submitted, you can
- navigate to a provided job URL to view a variety of on-device performance metrics.
-
- ```python
- # Device
- device = hub.Device("Samsung Galaxy S23")
-
- text_encoder_profile_job = hub.submit_profile_job(
-     model=text_encoder_target_model,
-     device=device,
- )
- unet_profile_job = hub.submit_profile_job(
-     model=unet_target_model,
-     device=device,
- )
- vae_decoder_profile_job = hub.submit_profile_job(
-     model=vae_decoder_target_model,
-     device=device,
- )
- ```
-
- Step 3: **Verify on-device accuracy**
-
- To verify the accuracy of the model on-device, you can run on-device inference
- on sample input data on the same cloud-hosted device.
-
- ```python
- text_encoder_input_data = model.text_encoder.sample_inputs()
- text_encoder_inference_job = hub.submit_inference_job(
-     model=text_encoder_target_model,
-     device=device,
-     inputs=text_encoder_input_data,
- )
- text_encoder_output = text_encoder_inference_job.download_output_data()
-
- unet_input_data = model.unet.sample_inputs()
- unet_inference_job = hub.submit_inference_job(
-     model=unet_target_model,
-     device=device,
-     inputs=unet_input_data,
- )
- unet_output = unet_inference_job.download_output_data()
-
- vae_decoder_input_data = model.vae_decoder.sample_inputs()
- vae_decoder_inference_job = hub.submit_inference_job(
-     model=vae_decoder_target_model,
-     device=device,
-     inputs=vae_decoder_input_data,
- )
- vae_decoder_output = vae_decoder_inference_job.download_output_data()
- ```
-
- With the output of the model, you can compute metrics like PSNR and relative
- error, or spot-check the output against the expected output.
-
- **Note**: On-device profiling and inference require access to Qualcomm®
- AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
-
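The PSNR spot-check mentioned above can be sketched as follows. This is an illustrative helper, not part of `qai_hub_models`, and the random arrays stand in for a local PyTorch output and the downloaded on-device output:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two same-shaped arrays."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # outputs are identical
    return 10.0 * np.log10((max_val**2) / mse)

# Stand-ins for a local PyTorch output and the on-device output downloaded above.
rng = np.random.default_rng(0)
torch_out = rng.random((1, 3, 512, 512)).astype(np.float32)
device_out = torch_out + 1e-3 * rng.standard_normal(torch_out.shape)
print(f"PSNR: {psnr(torch_out, device_out):.1f} dB")  # ~60 dB for 1e-3 noise
```

A high PSNR (tens of dB) indicates the on-device output closely tracks the reference; a sharp drop usually points to a quantization or conversion problem.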
@@ -286,9 +170,9 @@ The models can be deployed using multiple runtimes:
  guide to deploy the .tflite model in an Android application.


- - QNN ( `.so` / `.bin` export ): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
- provides instructions on how to use the `.so` shared library or `.bin` context binary in an Android application.


  ## View on Qualcomm® AI Hub
@@ -314,25 +198,3 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
  * For questions or feedback please [reach out to us](mailto:[email protected]).


- ## Usage and Limitations
-
- This model may not be used for or in connection with any of the following applications:
-
- - Accessing essential private and public services and benefits;
- - Administration of justice and democratic processes;
- - Assessing or recognizing the emotional state of a person;
- - Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- - Education and vocational training;
- - Employment and workers management;
- - Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- - General purpose social scoring;
- - Law enforcement;
- - Management and operation of critical infrastructure;
- - Migration, asylum and border control management;
- - Predictive policing;
- - Real-time remote biometric identification in public spaces;
- - Recommender systems of social media platforms;
- - Scraping of facial images (from the internet or otherwise); and/or
- - Subliminal manipulation
-
 
  | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  |---|---|---|---|---|---|---|---|---|
+ | TextEncoderQuantizable | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 6.622 ms | 0 - 2 MB | W8A16 | NPU | [Stable-Diffusion-v2.1.so](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/TextEncoderQuantizable.so) |
+ | TextEncoderQuantizable | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 4.851 ms | 0 - 19 MB | W8A16 | NPU | [Stable-Diffusion-v2.1.so](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/TextEncoderQuantizable.so) |
+ | TextEncoderQuantizable | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 4.198 ms | 0 - 15 MB | W8A16 | NPU | Use Export Script |
+ | TextEncoderQuantizable | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 6.896 ms | 0 - 0 MB | W8A16 | NPU | Use Export Script |
+ | TextEncoderQuantizable | SA7255P ADP | SA7255P | QNN | 88.097 ms | 0 - 8 MB | W8A16 | NPU | Use Export Script |
+ | TextEncoderQuantizable | SA8255 (Proxy) | SA8255P Proxy | QNN | 6.68 ms | 0 - 2 MB | W8A16 | NPU | Use Export Script |
+ | TextEncoderQuantizable | SA8650 (Proxy) | SA8650P Proxy | QNN | 6.651 ms | 0 - 5 MB | W8A16 | NPU | Use Export Script |
+ | TextEncoderQuantizable | SA8775P ADP | SA8775P | QNN | 7.894 ms | 0 - 10 MB | W8A16 | NPU | Use Export Script |
+ | TextEncoderQuantizable | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 88.097 ms | 0 - 8 MB | W8A16 | NPU | Use Export Script |
+ | TextEncoderQuantizable | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 6.643 ms | 0 - 3 MB | W8A16 | NPU | Use Export Script |
+ | TextEncoderQuantizable | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 7.894 ms | 0 - 10 MB | W8A16 | NPU | Use Export Script |
+ | UnetQuantizable | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 97.767 ms | 0 - 3 MB | W8A16 | NPU | [Stable-Diffusion-v2.1.so](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/UnetQuantizable.so) |
+ | UnetQuantizable | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 69.335 ms | 0 - 19 MB | W8A16 | NPU | [Stable-Diffusion-v2.1.so](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/UnetQuantizable.so) |
+ | UnetQuantizable | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 61.27 ms | 0 - 14 MB | W8A16 | NPU | Use Export Script |
+ | UnetQuantizable | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 99.423 ms | 0 - 0 MB | W8A16 | NPU | Use Export Script |
+ | UnetQuantizable | SA7255P ADP | SA7255P | QNN | 1468.169 ms | 0 - 8 MB | W8A16 | NPU | Use Export Script |
+ | UnetQuantizable | SA8255 (Proxy) | SA8255P Proxy | QNN | 96.812 ms | 0 - 2 MB | W8A16 | NPU | Use Export Script |
+ | UnetQuantizable | SA8650 (Proxy) | SA8650P Proxy | QNN | 97.233 ms | 0 - 3 MB | W8A16 | NPU | Use Export Script |
+ | UnetQuantizable | SA8775P ADP | SA8775P | QNN | 110.658 ms | 0 - 9 MB | W8A16 | NPU | Use Export Script |
+ | UnetQuantizable | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 1468.169 ms | 0 - 8 MB | W8A16 | NPU | Use Export Script |
+ | UnetQuantizable | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 98.147 ms | 0 - 3 MB | W8A16 | NPU | Use Export Script |
+ | UnetQuantizable | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 110.658 ms | 0 - 9 MB | W8A16 | NPU | Use Export Script |
+ | VaeDecoderQuantizable | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 274.636 ms | 0 - 4 MB | W8A16 | NPU | [Stable-Diffusion-v2.1.so](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/VaeDecoderQuantizable.so) |
+ | VaeDecoderQuantizable | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 206.701 ms | 0 - 18 MB | W8A16 | NPU | [Stable-Diffusion-v2.1.so](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/VaeDecoderQuantizable.so) |
+ | VaeDecoderQuantizable | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 189.387 ms | 0 - 355 MB | W8A16 | NPU | Use Export Script |
+ | VaeDecoderQuantizable | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 266.827 ms | 0 - 0 MB | W8A16 | NPU | Use Export Script |
+ | VaeDecoderQuantizable | SA7255P ADP | SA7255P | QNN | 4462.005 ms | 1 - 10 MB | W8A16 | NPU | Use Export Script |
+ | VaeDecoderQuantizable | SA8255 (Proxy) | SA8255P Proxy | QNN | 274.28 ms | 0 - 3 MB | W8A16 | NPU | Use Export Script |
+ | VaeDecoderQuantizable | SA8650 (Proxy) | SA8650P Proxy | QNN | 272.687 ms | 0 - 2 MB | W8A16 | NPU | Use Export Script |
+ | VaeDecoderQuantizable | SA8775P ADP | SA8775P | QNN | 301.027 ms | 0 - 10 MB | W8A16 | NPU | Use Export Script |
+ | VaeDecoderQuantizable | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 4462.005 ms | 1 - 10 MB | W8A16 | NPU | Use Export Script |
+ | VaeDecoderQuantizable | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 259.311 ms | 0 - 3 MB | W8A16 | NPU | Use Export Script |
+ | VaeDecoderQuantizable | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 301.027 ms | 0 - 10 MB | W8A16 | NPU | Use Export Script |
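In the table, W8A16 denotes 8-bit weights with 16-bit activations. For intuition only, a minimal symmetric fake-quantization sketch (`fake_quantize` is a hypothetical helper, not the AI Hub quantizer):

```python
import numpy as np

def fake_quantize(x: np.ndarray, n_bits: int) -> np.ndarray:
    """Symmetric per-tensor fake quantization: quantize to n_bits, then dequantize."""
    qmax = 2 ** (n_bits - 1) - 1      # 127 for 8-bit, 32767 for 16-bit
    scale = np.abs(x).max() / qmax    # one scale for the whole tensor
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)
err_w8 = np.abs(weights - fake_quantize(weights, 8)).max()    # weight path (8-bit)
err_a16 = np.abs(weights - fake_quantize(weights, 16)).max()  # activation path (16-bit)
print(err_w8 > err_a16)  # the 8-bit grid is coarser, so its rounding error is larger
```

The wider 16-bit activation grid keeps rounding error small through the long chain of diffusion steps, while 8-bit weights keep the model size down.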


  Install the package via pip:
  ```bash
+ pip install "qai-hub-models[stable-diffusion-v2-1-quantized]" -f https://qaihub-public-python-wheels.s3.us-west-2.amazonaws.com/index.html
  ```

 

+ ## Demo off target

  The package contains a simple end-to-end demo that downloads pre-trained
  weights and runs this model on a sample input.

  ```
  Profiling Results
  ------------------------------------------------------------
+ TextEncoderQuantizable
  Device : Samsung Galaxy S23 (13)
  Runtime : QNN
+ Estimated inference time (ms) : 6.6
+ Estimated peak memory usage (MB): [0, 2]
+ Total # Ops : 787
+ Compute Unit(s) : NPU (787 ops)

  ------------------------------------------------------------
+ UnetQuantizable
  Device : Samsung Galaxy S23 (13)
  Runtime : QNN
+ Estimated inference time (ms) : 97.8
+ Estimated peak memory usage (MB): [0, 3]
+ Total # Ops : 5891
+ Compute Unit(s) : NPU (5891 ops)

  ------------------------------------------------------------
+ VaeDecoderQuantizable
  Device : Samsung Galaxy S23 (13)
  Runtime : QNN
+ Estimated inference time (ms) : 274.6
+ Estimated peak memory usage (MB): [0, 4]
+ Total # Ops : 189
+ Compute Unit(s) : NPU (189 ops)
  ```
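As a back-of-the-envelope illustration (not from the source), the Galaxy S23 times above can be combined to estimate per-image latency; the 20-step denoising loop is an assumption, and scheduler/tokenizer overhead is ignored:

```python
# Per-invocation estimates (ms) for Samsung Galaxy S23, from the results above.
text_encoder_ms = 6.6
unet_ms = 97.8
vae_decoder_ms = 274.6

steps = 20  # assumed number of denoising iterations; real pipelines vary
total_ms = text_encoder_ms + steps * unet_ms + vae_decoder_ms
print(f"~{total_ms / 1000:.2f} s per image")
```

With these numbers the UNet loop accounts for close to 90% of the budget, so UNet optimizations dominate end-to-end latency.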
  guide to deploy the .tflite model in an Android application.


+ - QNN (`.so` export): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
+ provides instructions on how to use the `.so` shared library in an Android application.


  ## View on Qualcomm® AI Hub

  * For questions or feedback please [reach out to us](mailto:[email protected]).