qaihm-bot committed on
Commit 33346c8 · verified · 1 Parent(s): 2e15cb1

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -33,8 +33,8 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.957 ms | 1 - 3 MB | INT8 | NPU | [QuickSRNetSmall-Quantized.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall-Quantized/blob/main/QuickSRNetSmall-Quantized.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.662 ms | 0 - 2 MB | INT8 | NPU | [QuickSRNetSmall-Quantized.so](https://huggingface.co/qualcomm/QuickSRNetSmall-Quantized/blob/main/QuickSRNetSmall-Quantized.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.95 ms | 0 - 2 MB | INT8 | NPU | [QuickSRNetSmall-Quantized.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall-Quantized/blob/main/QuickSRNetSmall-Quantized.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.668 ms | 0 - 2 MB | INT8 | NPU | [QuickSRNetSmall-Quantized.so](https://huggingface.co/qualcomm/QuickSRNetSmall-Quantized/blob/main/QuickSRNetSmall-Quantized.so)
 
 
 ## Installation
@@ -42,10 +42,11 @@ More details on model performance across various devices, can be found
 This model can be installed as a Python package via pip.
 
 ```bash
-pip install qai-hub-models
+pip install "qai-hub-models[quicksrnetsmall_quantized]"
 ```
 
 
+
 ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
 
 Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
@@ -95,7 +96,7 @@ python -m qai_hub_models.models.quicksrnetsmall_quantized.export
 Profile Job summary of QuickSRNetSmall-Quantized
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 0.76 ms
+Estimated Inference Time: 0.74 ms
 Estimated Peak Memory Range: 0.05-0.05 MB
 Compute Units: NPU (8) | Total (8)
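As a sanity check on the updated benchmark rows (this snippet is not part of the original README, just arithmetic on the post-change figures): the new numbers imply the QNN Model Library path is roughly 1.42x faster than TFLite on the Galaxy S23 Ultra.

```python
# Post-change inference times from the updated benchmark table.
tflite_ms = 0.95   # TFLite on Samsung Galaxy S23 Ultra (Snapdragon 8 Gen 2)
qnn_ms = 0.668     # QNN Model Library on the same device

# Relative speedup of the QNN runtime over TFLite.
speedup = tflite_ms / qnn_ms
print(f"QNN Model Library is {speedup:.2f}x faster than TFLite")
```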