Inception-v3: Optimized for Mobile Deployment

Imagenet classifier and general-purpose backbone

Inception-v3 is a machine learning model that classifies images into Imagenet classes. It can also be used as a backbone in building more complex models for specific use cases.

This model is an implementation of Inception-v3 found here.

This repository provides scripts to run Inception-v3 on Qualcomm® devices. More details on model performance across various devices can be found here.

Model Details

  • Model Type: Image classification
  • Model Stats:
    • Model checkpoint: Imagenet
    • Input resolution: 224x224
    • Number of parameters: 23.9M
    • Model size (float): 90.9 MB
    • Model size (w8a8): 23.3 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| Inception-v3 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 7.739 | 0 - 60 | NPU | Inception-v3.tflite |
| Inception-v3 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 7.7 | 0 - 24 | NPU | Inception-v3.dlc |
| Inception-v3 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 2.112 | 0 - 99 | NPU | Inception-v3.tflite |
| Inception-v3 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.343 | 0 - 36 | NPU | Inception-v3.dlc |
| Inception-v3 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.293 | 0 - 361 | NPU | Inception-v3.tflite |
| Inception-v3 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.349 | 0 - 130 | NPU | Inception-v3.dlc |
| Inception-v3 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 2.176 | 0 - 60 | NPU | Inception-v3.tflite |
| Inception-v3 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 2.22 | 1 - 26 | NPU | Inception-v3.dlc |
| Inception-v3 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 7.739 | 0 - 60 | NPU | Inception-v3.tflite |
| Inception-v3 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 7.7 | 0 - 24 | NPU | Inception-v3.dlc |
| Inception-v3 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.297 | 0 - 364 | NPU | Inception-v3.tflite |
| Inception-v3 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.349 | 0 - 138 | NPU | Inception-v3.dlc |
| Inception-v3 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 2.562 | 0 - 62 | NPU | Inception-v3.tflite |
| Inception-v3 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 2.599 | 1 - 28 | NPU | Inception-v3.dlc |
| Inception-v3 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.296 | 0 - 364 | NPU | Inception-v3.tflite |
| Inception-v3 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.348 | 0 - 147 | NPU | Inception-v3.dlc |
| Inception-v3 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 2.176 | 0 - 60 | NPU | Inception-v3.tflite |
| Inception-v3 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 2.22 | 1 - 26 | NPU | Inception-v3.dlc |
| Inception-v3 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.298 | 0 - 375 | NPU | Inception-v3.tflite |
| Inception-v3 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.351 | 0 - 161 | NPU | Inception-v3.dlc |
| Inception-v3 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 1.586 | 0 - 179 | NPU | Inception-v3.onnx.zip |
| Inception-v3 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.973 | 0 - 98 | NPU | Inception-v3.tflite |
| Inception-v3 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.0 | 0 - 32 | NPU | Inception-v3.dlc |
| Inception-v3 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 1.143 | 0 - 29 | NPU | Inception-v3.onnx.zip |
| Inception-v3 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.939 | 0 - 65 | NPU | Inception-v3.tflite |
| Inception-v3 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.96 | 1 - 30 | NPU | Inception-v3.dlc |
| Inception-v3 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.169 | 1 - 26 | NPU | Inception-v3.onnx.zip |
| Inception-v3 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.419 | 147 - 147 | NPU | Inception-v3.dlc |
| Inception-v3 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.481 | 46 - 46 | NPU | Inception-v3.onnx.zip |
| Inception-v3 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 1.492 | 0 - 37 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.407 | 0 - 41 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.755 | 0 - 61 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.9 | 0 - 55 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.632 | 0 - 134 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.603 | 0 - 144 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 0.832 | 0 - 37 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.776 | 0 - 41 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 2.561 | 0 - 56 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 2.764 | 0 - 52 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 7.792 | 0 - 3 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 1.492 | 0 - 37 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.407 | 0 - 41 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.64 | 0 - 135 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.612 | 0 - 142 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.112 | 0 - 44 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.13 | 0 - 48 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.64 | 0 - 133 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.611 | 0 - 143 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 0.832 | 0 - 37 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.776 | 0 - 41 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.638 | 0 - 134 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.613 | 0 - 143 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.478 | 0 - 56 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.456 | 0 - 54 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.453 | 0 - 44 | NPU | Inception-v3.tflite |
| Inception-v3 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.446 | 0 - 48 | NPU | Inception-v3.dlc |
| Inception-v3 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.652 | 134 - 134 | NPU | Inception-v3.dlc |

Installation

Install the package via pip:

pip install qai-hub-models

Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.

qai-hub configure --api_token API_TOKEN

Navigate to docs for more information.
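
To verify the configuration, you can list the cloud-hosted devices visible to your account. This is a minimal sketch using the qai_hub Python API; an authentication error here usually means the API token was not set correctly.

import qai_hub as hub

# Enumerate devices available to your account; this call fails
# if the API token is missing or invalid.
for device in hub.get_devices():
    print(device.name)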

Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

python -m qai_hub_models.models.inception_v3.demo

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: If you want to run this in a Jupyter Notebook or Google Colab-like environment, use the following in your cell (instead of the command above).

%run -m qai_hub_models.models.inception_v3.demo
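
Under the hood, the demo performs roughly the following steps. This is a simplified sketch for illustration; the exact preprocessing and class-label handling live in the package's demo utilities, and the sample_inputs() layout is assumed to match its use in the export flow below.

import torch
from qai_hub_models.models.inception_v3 import Model

# Load pre-trained weights and a bundled, pre-processed sample input.
torch_model = Model.from_pretrained()
sample_inputs = torch_model.sample_inputs()

# Run inference and report the top-5 Imagenet class indices.
with torch.no_grad():
    logits = torch_model(*[torch.from_numpy(arrs[0]) for arrs in sample_inputs.values()])
probs = torch.softmax(logits, dim=-1)
print(torch.topk(probs, k=5).indices)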

Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. The export script below does the following:

  • Runs a performance check on a cloud-hosted device.
  • Downloads compiled assets that can be deployed on-device for Android.
  • Checks accuracy between PyTorch and on-device outputs.

python -m qai_hub_models.models.inception_v3.export

How does this work?

This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: Compile model for on-device deployment

To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.

import torch

import qai_hub as hub
from qai_hub_models.models.inception_v3 import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_shape,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
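
To produce a specific asset from the table above (for example a TFLITE or QNN_DLC build), you can pass compile options to submit_compile_job. The --target_runtime flag shown here is part of the AI Hub compile options; verify the exact values against the AI Hub documentation.

# Same compile job, pinned to a TensorFlow Lite target.
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_shape,
    options="--target_runtime tflite",
)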

Step 2: Performance profiling on cloud-hosted device

After compiling the model in Step 1, you can profile it on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.

profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
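
Beyond the job URL, profiling results can also be pulled programmatically. A sketch under stated assumptions: download_profile() returns the profile as a Python dict, and the key names below reflect the profile JSON at the time of writing (timings in microseconds), so check your own job's output if the schema differs.

# Blocks until the job finishes, then returns the profile as a dict.
profile = profile_job.download_profile()

# Assumed schema: summary timings live under "execution_summary".
summary = profile["execution_summary"]
print("Estimated inference time (us):", summary["estimated_inference_time"])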

Step 3: Verify on-device accuracy

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud hosted device.

input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()

With the output of the model, you can compute metrics such as PSNR or relative error, or spot-check the output against the expected output.
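
For example, a PSNR spot check between the local PyTorch output and the on-device output might look like the following sketch; it assumes the input and output dict layouts match sample_inputs() and download_output_data() as used above.

import numpy as np

# Reference output from the local PyTorch model.
with torch.no_grad():
    ref = torch_model(*[torch.from_numpy(arrs[0]) for arrs in input_data.values()]).numpy()

# First output tensor from the on-device run.
dev = list(on_device_output.values())[0][0]

mse = np.mean((ref - dev) ** 2)
psnr = 10 * np.log10(np.max(ref ** 2) / mse)
print(f"PSNR: {psnr:.2f} dB")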

Note: On-device profiling and inference require access to Qualcomm® AI Hub. Sign up for access.

Run demo on a cloud-hosted device

You can also run the demo on-device.

python -m qai_hub_models.models.inception_v3.demo --eval-mode on-device

NOTE: If you want to run this in a Jupyter Notebook or Google Colab-like environment, use the following in your cell (instead of the command above).

%run -m qai_hub_models.models.inception_v3.demo -- --eval-mode on-device

Deploying compiled model to Android

The models can be deployed using multiple runtimes:

  • TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application.

  • QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
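
Before wiring an asset into an Android app, it can help to sanity-check the exported .tflite on the host. Below is a minimal sketch with the TensorFlow Lite interpreter; the model path is hypothetical and should point to wherever the export script saved the compiled asset.

import numpy as np
import tensorflow as tf

# Hypothetical path to the asset downloaded by the export script.
interpreter = tf.lite.Interpreter(model_path="build/inception_v3/Inception-v3.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a random image just to confirm the asset loads and runs.
interpreter.set_tensor(inp["index"], np.random.rand(*inp["shape"]).astype(np.float32))
interpreter.invoke()
print("Output shape:", interpreter.get_tensor(out["index"]).shape)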

View on Qualcomm® AI Hub

Get more details on Inception-v3's performance across various devices here. Explore all available models on Qualcomm® AI Hub.

License

  • The license for the original implementation of Inception-v3 can be found here.
  • The license for the compiled assets for on-device deployment can be found here.
