Commit 8e8123f (verified) by qaihm-bot
1 Parent(s): 784c5d9

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +40 -19

README.md CHANGED
@@ -14,7 +14,7 @@ tags:
 
 QuickSRNet Small is designed for upscaling images on mobile platforms to sharpen in real-time.
 
-This model is an implementation of QuickSRNetSmall found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet).
+This model is an implementation of QuickSRNetSmall found [here]({source_repo}).
 This repository provides scripts to run QuickSRNetSmall on Qualcomm® devices.
 More details on model performance across various devices can be found
 [here](https://aihub.qualcomm.com/models/quicksrnetsmall).
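
As a quick sanity check before targeting a device, the source PyTorch model can be loaded and run locally. The snippet below is only a sketch: it assumes the package follows the usual qai-hub-models convention of exposing a `Model` class with a `from_pretrained()` helper for this model, and the 128x128 input resolution is an illustrative choice rather than a value taken from this commit.

```python
import torch

# Assumed import path, following the per-model package convention in qai-hub-models.
from qai_hub_models.models.quicksrnetsmall import Model

# Load the pretrained source model and switch to inference mode.
model = Model.from_pretrained()
model.eval()

# Dummy low-resolution RGB input in NCHW layout (resolution is illustrative).
low_res = torch.rand(1, 3, 128, 128)

with torch.no_grad():
    upscaled = model(low_res)

# The output should have larger spatial dimensions than the input.
print(low_res.shape, "->", upscaled.shape)
```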
@@ -29,15 +29,32 @@ More details on model performance across various devices can be found
 - Number of parameters: 27.2K
 - Model size: 110 KB
 
-| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
-|---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.298 ms | 7 - 75 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite) |
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.01 ms | 0 - 62 MB | FP16 | NPU | [QuickSRNetSmall.so](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.so) |
+| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
+|---|---|---|---|---|---|---|---|---|
+| QuickSRNetSmall | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 1.34 ms | 0 - 8 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite) |
+| QuickSRNetSmall | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 1.061 ms | 0 - 7 MB | FP16 | NPU | [QuickSRNetSmall.so](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.so) |
+| QuickSRNetSmall | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 1.44 ms | 0 - 2 MB | FP16 | NPU | [QuickSRNetSmall.onnx](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.onnx) |
+| QuickSRNetSmall | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 0.83 ms | 0 - 20 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite) |
+| QuickSRNetSmall | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 0.638 ms | 0 - 12 MB | FP16 | NPU | [QuickSRNetSmall.so](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.so) |
+| QuickSRNetSmall | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 0.997 ms | 0 - 21 MB | FP16 | NPU | [QuickSRNetSmall.onnx](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.onnx) |
+| QuickSRNetSmall | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 1.36 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite) |
+| QuickSRNetSmall | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 0.863 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetSmall | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 1.413 ms | 0 - 22 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite) |
+| QuickSRNetSmall | SA8255 (Proxy) | SA8255P Proxy | QNN | 0.877 ms | 0 - 4 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetSmall | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 1.361 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite) |
+| QuickSRNetSmall | SA8775 (Proxy) | SA8775P Proxy | QNN | 0.863 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetSmall | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 1.316 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite) |
+| QuickSRNetSmall | SA8650 (Proxy) | SA8650P Proxy | QNN | 0.872 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetSmall | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 2.807 ms | 0 - 20 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite) |
+| QuickSRNetSmall | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 1.118 ms | 0 - 12 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetSmall | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 0.746 ms | 0 - 14 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite) |
+| QuickSRNetSmall | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 0.663 ms | 0 - 9 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetSmall | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 0.968 ms | 0 - 14 MB | FP16 | NPU | [QuickSRNetSmall.onnx](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.onnx) |
+| QuickSRNetSmall | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 0.942 ms | 0 - 0 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetSmall | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.468 ms | 9 - 9 MB | FP16 | NPU | [QuickSRNetSmall.onnx](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.onnx) |
 
 ## Installation
 
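The numbers in the added table are produced by profile jobs that run the compiled assets on real hosted devices. A minimal sketch of submitting such a job with the `qai_hub` client is shown below; it assumes the client is already configured with an API token and that the `QuickSRNetSmall.tflite` asset from this repository has been downloaded to the working directory.

```python
import qai_hub as hub

# Upload the precompiled TFLite asset (file name assumed from this repository).
model = hub.upload_model("QuickSRNetSmall.tflite")

# Profile it on a hosted device; the device name mirrors one row of the table above.
profile_job = hub.submit_profile_job(
    model=model,
    device=hub.Device("Samsung Galaxy S23"),
)

# Wait for the job to finish, then inspect the measured latency and memory use.
profile_job.wait()
print(profile_job)
```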
 
@@ -92,16 +109,16 @@ device. This script does the following:
 ```bash
 python -m qai_hub_models.models.quicksrnetsmall.export
 ```
 ```
-Profile Job summary of QuickSRNetSmall
---------------------------------------------------
-Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 0.94 ms
-Estimated Peak Memory Range: 0.20-0.20 MB
-Compute Units: NPU (11) | Total (11)
+Profiling Results
+------------------------------------------------------------
+QuickSRNetSmall
+Device : Samsung Galaxy S23 (13)
+Runtime : TFLITE
+Estimated inference time (ms) : 1.3
+Estimated peak memory usage (MB) : [0, 8]
+Total # Ops : 11
+Compute Unit(s) : NPU (8 ops) CPU (3 ops)
 ```
 
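Off target, the exported `.tflite` asset can also be exercised with the standard TensorFlow Lite interpreter. The following sketch assumes the model takes a single NHWC float32 image normalized to [0, 1] and returns the upscaled image in the same format; the file and image names are placeholders.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the TFLite asset (file name assumed from this repository).
interpreter = tf.lite.Interpreter(model_path="QuickSRNetSmall.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Read the fixed input resolution from the model itself (NHWC layout).
_, height, width, _ = input_details["shape"]

# Prepare a low-resolution input image (normalization to [0, 1] is an assumption).
image = Image.open("low_res.png").convert("RGB").resize((width, height))
low_res = np.asarray(image, dtype=np.float32)[None, ...] / 255.0

# Run inference and recover the upscaled image.
interpreter.set_tensor(input_details["index"], low_res)
interpreter.invoke()
upscaled = interpreter.get_tensor(output_details["index"])[0]

Image.fromarray(np.clip(upscaled * 255.0, 0, 255).astype(np.uint8)).save("upscaled.png")
```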
 
@@ -200,15 +217,19 @@ provides instructions on how to use the `.so` shared library in an Android appl
 Get more details on QuickSRNetSmall's performance across various devices [here](https://aihub.qualcomm.com/models/quicksrnetsmall).
 Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 
 ## License
-- The license for the original implementation of QuickSRNetSmall can be found
-[here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+* The license for the original implementation of QuickSRNetSmall can be found [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
+* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms](https://arxiv.org/abs/2303.04336)
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
 
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:[email protected]).
 