HRNET_W48_OCR: Optimized for Mobile Deployment

Semantic segmentation at high resolution

HRNET_W48_OCR is a machine learning model that segments images from the Cityscapes dataset. Its lightweight, hardware-efficient operations deliver significant speedups on diverse hardware platforms.

This model is an implementation of HRNET_W48_OCR found here.

This repository provides scripts to run HRNET_W48_OCR on Qualcomm® devices. More details on model performance across various devices can be found here.

Model Details

  • Model Type: Semantic segmentation
  • Model Stats:
    • Model checkpoint: hrnet_ocr_cs_8162_torch11.pth
    • Input resolution: 2048x1024
    • Number of output classes: 19
    • Number of parameters: 70.3M
    • Model size: 268 MB
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| HRNET_W48_OCR | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 81759.68 | 4 - 90 | FP16 | NPU | HRNET_W48_OCR.tflite |
| HRNET_W48_OCR | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 85025.233 | 25 - 27 | FP16 | NPU | HRNET_W48_OCR.so |
| HRNET_W48_OCR | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 91343.058 | 4 - 322 | FP16 | NPU | HRNET_W48_OCR.onnx |
| HRNET_W48_OCR | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 56149.777 | 7 - 535 | FP16 | NPU | HRNET_W48_OCR.tflite |
| HRNET_W48_OCR | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 56705.212 | 24 - 43 | FP16 | NPU | HRNET_W48_OCR.so |
| HRNET_W48_OCR | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 58687.824 | 30 - 455 | FP16 | NPU | HRNET_W48_OCR.onnx |
| HRNET_W48_OCR | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 39559.656 | 4 - 597 | FP16 | NPU | HRNET_W48_OCR.tflite |
| HRNET_W48_OCR | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 50457.903 | 24 - 479 | FP16 | NPU | Use Export Script |
| HRNET_W48_OCR | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 51984.785 | 24 - 510 | FP16 | NPU | HRNET_W48_OCR.onnx |
| HRNET_W48_OCR | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 154673.294 | 4 - 602 | FP16 | NPU | HRNET_W48_OCR.tflite |
| HRNET_W48_OCR | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 156564.278 | 24 - 33 | FP16 | NPU | Use Export Script |
| HRNET_W48_OCR | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 79765.95 | 11 - 98 | FP16 | NPU | HRNET_W48_OCR.tflite |
| HRNET_W48_OCR | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 82649.772 | 24 - 27 | FP16 | NPU | Use Export Script |
| HRNET_W48_OCR | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 97744.736 | 5 - 600 | FP16 | NPU | HRNET_W48_OCR.tflite |
| HRNET_W48_OCR | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 99531.62 | 24 - 34 | FP16 | NPU | Use Export Script |
| HRNET_W48_OCR | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 78052.977 | 13 - 841 | FP16 | NPU | HRNET_W48_OCR.tflite |
| HRNET_W48_OCR | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 86782.982 | 40 - 661 | FP16 | NPU | Use Export Script |
| HRNET_W48_OCR | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 83601.493 | 24 - 24 | FP16 | NPU | Use Export Script |
| HRNET_W48_OCR | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 80943.071 | 132 - 132 | FP16 | NPU | HRNET_W48_OCR.onnx |

Installation

Install the package via pip:

pip install "qai-hub-models[hrnet-w48-ocr]"

Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.

qai-hub configure --api_token API_TOKEN

Navigate to docs for more information.

Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

python -m qai_hub_models.models.hrnet_w48_ocr.demo

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the above command.

%run -m qai_hub_models.models.hrnet_w48_ocr.demo

Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:

  • Performance check on-device on a cloud-hosted device
  • Downloads compiled assets that can be deployed on-device for Android.
  • Accuracy check between PyTorch and on-device outputs.

python -m qai_hub_models.models.hrnet_w48_ocr.export
Profiling Results
------------------------------------------------------------
HRNET_W48_OCR
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE                 
Estimated inference time (ms)   : 81759.7                
Estimated peak memory usage (MB): [4, 90]                
Total # Ops                     : 578                    
Compute Unit(s)                 : NPU (578 ops)          

How does this work?

This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: Compile model for on-device deployment

To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.

import torch

import qai_hub as hub
from qai_hub_models.models.hrnet_w48_ocr import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_shape,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
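
Once compiled, the target asset can also be saved locally for inspection. A minimal sketch, assuming the qai_hub Model object's download() method; the filename is illustrative:

# Download the compiled asset to a local file (filename is illustrative)
target_model.download("HRNET_W48_OCR.tflite")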

Step 2: Performance profiling on cloud-hosted device

After compiling the model in Step 1, it can be profiled on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.

profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
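
Beyond the job URL, the collected metrics can be retrieved programmatically. A minimal sketch, assuming the qai_hub ProfileJob exposes download_profile():

# Wait for the job to finish and fetch the on-device metrics as a Python dict
profile_results = profile_job.download_profile()
print(profile_results)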
        

Step 3: Verify on-device accuracy

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()

With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.
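
For instance, here is a minimal PSNR check between the PyTorch reference output and the on-device output. It is a sketch that assumes the model returns a single tensor and that on_device_output maps each output name to a list of numpy arrays; adapt the indexing to the model's actual outputs.

import numpy as np

def psnr(reference, test):
    # Peak signal-to-noise ratio in dB; higher means the outputs are closer.
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    peak = np.max(np.abs(reference))
    return 10 * np.log10(peak**2 / mse)

# Reference output from the PyTorch model on the same sample inputs
torch_output = torch_model(*[torch.tensor(data[0]) for _, data in input_data.items()])

# First array of the first output tensor returned by the device
device_output = next(iter(on_device_output.values()))[0]

print(f"PSNR: {psnr(torch_output.detach().numpy(), device_output):.2f} dB")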

Note: This on-device profiling and inference requires access to Qualcomm® AI Hub. Sign up for access.

Deploying compiled model to Android

The models can be deployed using multiple runtimes:

  • TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application. A local sanity check for the .tflite asset is sketched after this list.

  • QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
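
Before integrating the .tflite asset into an Android app, it can help to sanity-check it locally. A minimal sketch using the TensorFlow Lite Python interpreter; the model path is illustrative and assumes the asset was downloaded from the export script:

import numpy as np
import tensorflow as tf

# Load the exported model (path is illustrative)
interpreter = tf.lite.Interpreter(model_path="HRNET_W48_OCR.tflite")
interpreter.allocate_tensors()

# Feed a random input matching the model's declared input spec
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(*inp["shape"]).astype(inp["dtype"]))
interpreter.invoke()

# Inspect the output tensor's shape
out = interpreter.get_output_details()[0]
print("Output shape:", interpreter.get_tensor(out["index"]).shape)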

View on Qualcomm® AI Hub

Get more details on HRNET_W48_OCR's performance across various devices here. Explore all available models on Qualcomm® AI Hub.

License

  • The license for the original implementation of HRNET_W48_OCR can be found here.
  • The license for the compiled assets for on-device deployment can be found here.
