Whisper-Base: Optimized for Mobile Deployment

Transformer-based automatic speech recognition (ASR) model for multilingual transcription and translation, available on HuggingFace

The HuggingFace Whisper-Base ASR (automatic speech recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. The model is based on the transformer architecture and has been optimized for edge inference by replacing Multi-Head Attention (MHA) with Single-Head Attention (SHA) and linear layers with convolutional (conv) layers. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. In particular, it excels at long-form transcription and can accurately transcribe audio clips up to 30 seconds long. Time to first token is the encoder's latency, while time to each additional token is the decoder's latency, assuming the maximum decoded length specified below.

This model is an implementation of Whisper-Base found here.

This repository provides scripts to run Whisper-Base on Qualcomm® devices. More details on model performance across various devices can be found here.

Model Details

  • Model Type: Speech recognition
  • Model Stats:
    • Model checkpoint: openai/whisper-base
    • Input resolution: 80x3000 (30 seconds of audio)
    • Max decoded sequence length: 200 tokens
    • Number of parameters (HfWhisperEncoder): 23.7M
    • Model size (HfWhisperEncoder) (float): 90.7 MB
    • Number of parameters (HfWhisperDecoder): 48.9M
    • Model size (HfWhisperDecoder) (float): 187 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| HfWhisperEncoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 130.662 | 1 - 10 | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_CONTEXT_BINARY | 111.747 | 1 - 17 | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 43.301 | 1 - 3 | NPU | Use Export Script |
| HfWhisperEncoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 50.193 | 1 - 11 | NPU | Use Export Script |
| HfWhisperEncoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 130.662 | 1 - 10 | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 43.516 | 1 - 2 | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_CONTEXT_BINARY | 92.575 | 1 - 17 | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 43.199 | 1 - 3 | NPU | Use Export Script |
| HfWhisperEncoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 50.193 | 1 - 11 | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 43.355 | 1 - 4 | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 44.114 | 0 - 67 | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 34.042 | 1 - 18 | NPU | Use Export Script |
| HfWhisperEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 34.282 | 34 - 53 | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 28.367 | 0 - 14 | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 29.337 | 39 - 52 | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 42.085 | 0 - 0 | NPU | Use Export Script |
| HfWhisperEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 42.398 | 66 - 66 | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_CONTEXT_BINARY | 6.224 | 20 - 29 | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_CONTEXT_BINARY | 5.156 | 20 - 42 | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 4.053 | 20 - 22 | NPU | Use Export Script |
| HfWhisperDecoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 4.663 | 20 - 30 | NPU | Use Export Script |
| HfWhisperDecoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_CONTEXT_BINARY | 6.224 | 20 - 29 | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 3.858 | 20 - 22 | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_CONTEXT_BINARY | 5.199 | 20 - 37 | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 3.982 | 19 - 21 | NPU | Use Export Script |
| HfWhisperDecoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 4.663 | 20 - 30 | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 3.951 | 19 - 21 | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 4.654 | 0 - 143 | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 3.142 | 0 - 19 | NPU | Use Export Script |
| HfWhisperDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 3.662 | 26 - 46 | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 2.638 | 18 - 32 | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 3.068 | 17 - 31 | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 3.458 | 20 - 20 | NPU | Use Export Script |
| HfWhisperDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 3.681 | 125 - 125 | NPU | Use Export Script |
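
As the description above notes, time to first token is one encoder pass and each additional token costs one decoder pass, so a rough end-to-end latency can be estimated directly from the table. Below is a minimal sketch; the numbers are the Samsung Galaxy S24 QNN_CONTEXT_BINARY figures from the table, and 200 tokens is the maximum decoded length from the model stats (real transcriptions usually stop earlier).

# Back-of-the-envelope latency estimate from the profiling table above.
encoder_ms = 34.042   # one encoder pass = time to first token
decoder_ms = 3.142    # one decoder pass = time per additional token
max_tokens = 200      # max decoded sequence length

total_ms = encoder_ms + max_tokens * decoder_ms
print(f"Worst-case latency for a 30-second clip: ~{total_ms / 1000:.2f} s")  # ~0.66 s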

Installation

Install the package via pip:

pip install "qai-hub-models[whisper-base]"

Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.

qai-hub configure --api_token API_TOKEN

Navigate to docs for more information.
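
Once configured, a quick way to confirm the token works is to list the cloud-hosted devices visible to your account. This is a minimal sketch using the qai_hub client; if authentication fails, the call raises an error.

import qai_hub as hub

# Lists the cloud-hosted devices available to your API token.
for device in hub.get_devices():
    print(device.name)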

Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

python -m qai_hub_models.models.whisper_base.demo

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the command above.

%run -m qai_hub_models.models.whisper_base.demo

Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:

  • Runs a performance check on-device on a cloud-hosted device
  • Downloads compiled assets that can be deployed on-device for Android
  • Checks accuracy between PyTorch and on-device outputs

python -m qai_hub_models.models.whisper_base.export

How does this work?

This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: Compile model for on-device deployment

To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.

import torch

import qai_hub as hub
from qai_hub_models.models.whisper_base import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_shape,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()

Step 2: Performance profiling on cloud-hosted device

After compiling the model in Step 1, it can be profiled on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.

profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
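
Besides the job URL, the profiling results can also be fetched programmatically once the job finishes. The sketch below uses the download_profile API; the dictionary keys shown (execution_summary, estimated_inference_time) are assumptions about the profile layout, so inspect the returned dictionary for the actual structure.

# Download the raw profile as a dictionary once the job completes.
profile = profile_job.download_profile()

# NOTE: these keys are assumptions; print `profile` to see the real layout.
summary = profile.get("execution_summary", {})
print("Estimated inference time (us):", summary.get("estimated_inference_time"))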

Step 3: Verify on-device accuracy

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()

With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.
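
For example, a PSNR spot check between the PyTorch reference and the on-device output might look like the sketch below. It assumes the traced model returns a single tensor and that on_device_output maps each output name to a list of numpy arrays, one per input batch; adapt the unpacking if the model returns multiple outputs.

import numpy as np
import torch

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    # Peak signal-to-noise ratio in dB; higher means closer agreement.
    mse = np.mean((reference - test) ** 2)
    return float(20 * np.log10(np.abs(reference).max()) - 10 * np.log10(mse))

# Run the PyTorch reference on the same sample inputs.
with torch.no_grad():
    reference = torch_model(*(torch.tensor(d[0]) for d in input_data.values()))

# Compare against the first on-device output array.
first_output = next(iter(on_device_output.values()))[0]
print("PSNR (dB):", psnr(reference.numpy(), first_output))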

Note: This on-device profiling and inference requires access to Qualcomm® AI Hub. Sign up for access.

Deploying compiled model to Android

The models can be deployed using multiple runtimes:

  • TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application.

  • QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
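
Either way, the compiled asset from the compile step can be saved locally before being bundled into an application. Below is a minimal sketch using the target_model from Step 1; the filename is illustrative, and its extension should match the runtime you compiled for (.tflite, .so, etc.).

# Save the compiled model to disk for integration into an Android app.
target_model = compile_job.get_target_model()
target_model.download("whisper_base.tflite")  # illustrative filename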

View on Qualcomm® AI Hub

Get more details on Whisper-Base's performance across various devices here. Explore all available models on Qualcomm® AI Hub.

License

  • The license for the original implementation of Whisper-Base can be found here.
  • The license for the compiled assets for on-device deployment can be found here.
