qaihm-bot committed
Commit f3fa102 · verified · 1 Parent(s): 355e773

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +15 -9
README.md CHANGED
@@ -24,16 +24,19 @@ More details on model performance across various devices, can be found
 
 - **Model Type:** Super resolution
 - **Model Stats:**
-  - Model checkpoint: quicksrnet_large_4x_checkpoint_float32
-  - Input resolution: 128x128
-  - Number of parameters: 436K
-  - Model size: 1.67 MB
+  - Model checkpoint: quicksrnet_large_3x_checkpoint
+  - Input resolution: 640x360
+  - Number of parameters: 424K
+  - Model size: 1.63 MB
+
+
 
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 2.401 ms | 0 - 16 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 2.092 ms | 0 - 12 MB | FP16 | NPU | [QuickSRNetLarge.so](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 2.412 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 2.108 ms | 0 - 5 MB | FP16 | NPU | [QuickSRNetLarge.so](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.so)
+
 
 
 ## Installation
@@ -95,14 +98,16 @@ Profile Job summary of QuickSRNetLarge
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
 Estimated Inference Time: 2.95 ms
-Estimated Peak Memory Range: 0.24-0.24 MB
+Estimated Peak Memory Range: 0.20-0.20 MB
 Compute Units: NPU (31) | Total (31)
 
 
 ```
+
+
 ## How does this work?
 
-This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/QuickSRNetLarge/export.py)
+This [export script](https://aihub.qualcomm.com/models/quicksrnetlarge/qai_hub_models/models/QuickSRNetLarge/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:
 
@@ -179,6 +184,7 @@ spot check the output with expected output.
 AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
+
 ## Run demo on a cloud-hosted device
 
 You can also run the demo on-device.
@@ -215,7 +221,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of QuickSRNetLarge can be found
 [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
-- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms](https://arxiv.org/abs/2303.04336)
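
The updated model stats in this commit are internally consistent, which is a quick way to review the diff: a 3x super-resolution model on a 640x360 input yields 1920x1080 (full HD) output, and ~424K parameters land near the listed 1.63 MB if each weight takes 4 bytes (the FP32 byte width is an assumption here; it is not stated in the diff). A minimal sanity-check sketch:

```python
# Sanity-check the updated QuickSRNetLarge stats from the README diff.
# Assumption: the 1.63 MB figure counts 4-byte (FP32) weights, measured in MiB.

scale = 3                       # quicksrnet_large_3x_checkpoint
in_w, in_h = 640, 360           # new input resolution
out_w, out_h = in_w * scale, in_h * scale
print(f"Output resolution: {out_w}x{out_h}")    # 1920x1080, i.e. full HD

params = 424_000                # "424K" parameters (rounded in the README)
size_mib = params * 4 / 2**20   # FP32 bytes -> MiB
print(f"Approx. model size: {size_mib:.2f} MB") # ~1.62, close to the listed 1.63
```

The small gap between ~1.62 and the listed 1.63 MB is consistent with "424K" being a rounded parameter count.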