Whisper-Large-V3-Turbo: Optimized for Qualcomm Devices
Whisper large-v3-turbo is a fine-tuned version of a pruned Whisper large-v3. In other words, it is the same model except that the number of decoder layers has been reduced from 32 to 4. As a result, the model is significantly faster, at the cost of a minor quality degradation. This model is based on the transformer architecture and has been optimized for edge inference by replacing Multi-Head Attention (MHA) with Single-Head Attention (SHA) and linear layers with convolutional (conv) layers. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. In particular, it excels at long-form transcription and can accurately transcribe audio clips up to 30 seconds long. Time to the first token is the encoder's latency, while time to each additional token is the decoder's latency, assuming the maximum decoded sequence length specified below.
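As a rough illustration of that latency model, end-to-end transcription time for one 30-second chunk can be estimated as the encoder latency plus the per-token decoder latency multiplied by the number of decoded tokens. The sketch below uses example figures taken from the Performance Summary table further down (Snapdragon® 8 Elite Gen 5, QNN context binary); substitute the numbers for your own target device.

```python
# Back-of-the-envelope latency estimate for one 30-second audio chunk.
# The example numbers come from the Performance Summary table below
# (Snapdragon 8 Elite Gen 5, QNN_CONTEXT_BINARY); treat them as illustrative.

encoder_ms_per_chunk = 248.6   # time to first token (encoder runs once per chunk)
decoder_ms_per_token = 6.2     # time per additional token (decoder runs once per token)
max_decoded_tokens = 200       # max decoded sequence length assumed by this model card

total_ms = encoder_ms_per_chunk + decoder_ms_per_token * max_decoded_tokens
real_time_factor = (total_ms / 1000.0) / 30.0  # fraction of audio duration spent transcribing

print(f"Worst-case chunk latency: {total_ms:.0f} ms (RTF ~ {real_time_factor:.2f})")
```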
This model is based on the implementation of Whisper-Large-V3-Turbo found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.
Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
Getting Started
There are two ways to deploy this model on your device:
Option 1: Download Pre-Exported Models
Below are pre-exported model assets ready for deployment.
| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| PRECOMPILED_QNN_ONNX | float | qualcomm_qcs8550_proxy | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | float | qualcomm_qcs9075 | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | float | qualcomm_snapdragon_8_elite_for_galaxy | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | float | qualcomm_snapdragon_8_elite_gen5 | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | float | qualcomm_snapdragon_8gen3 | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | float | qualcomm_snapdragon_x2_elite | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | float | qualcomm_snapdragon_x_elite | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_qcs8450_proxy | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_qcs8550_proxy | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_qcs9075 | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_sa7255p | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_sa8295p | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_sa8775p | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_8_elite_for_galaxy | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_8_elite_gen5 | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_8gen3 | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_x2_elite | QAIRT 2.43 | Download |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_x_elite | QAIRT 2.43 | Download |
For more device-specific assets and performance metrics, visit Whisper-Large-V3-Turbo on Qualcomm® AI Hub.
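If you deploy one of the PRECOMPILED_QNN_ONNX assets above, it can be loaded with ONNX Runtime's QNN execution provider. The following is a minimal sketch, not the official runner: the encoder file name is a placeholder for whatever the downloaded package contains, the input name and shape should be checked against the actual model, and `backend_path` must point to the QNN HTP backend library shipped with your QAIRT SDK (e.g. `QnnHtp.dll` on Windows, `libQnnHtp.so` on Android/Linux). The decoder runs autoregressively (one call per token) and is omitted here for brevity.

```python
# Minimal sketch: run the pre-exported encoder with ONNX Runtime's QNN
# execution provider. File names below are placeholders for the assets you
# downloaded; adjust paths for your platform and QAIRT install.
import numpy as np
import onnxruntime as ort

qnn_options = {"backend_path": "QnnHtp.dll"}  # use "libQnnHtp.so" on Linux/Android

encoder = ort.InferenceSession(
    "whisper_large_v3_turbo_encoder.onnx",  # placeholder file name
    providers=["QNNExecutionProvider"],
    provider_options=[qnn_options],
)

# 128 mel bins x 3000 frames = one 30-second audio window (see Model Details).
mel = np.zeros((1, 128, 3000), dtype=np.float32)
encoder_inputs = {encoder.get_inputs()[0].name: mel}
encoder_outputs = encoder.run(None, encoder_inputs)
print([o.shape for o in encoder_outputs])
```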
Option 2: Export with Custom Configurations
Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations
This option is ideal if you need to customize the model beyond the default configuration provided here.
See our repository for Whisper-Large-V3-Turbo on GitHub for usage instructions.
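As a sketch of what that workflow typically looks like with the Qualcomm® AI Hub Models package: the model id, device name, and any additional flags below follow the package's usual conventions but are assumptions; the repository linked above has the authoritative usage instructions.

```python
# Sketch: export Whisper-Large-V3-Turbo for a specific target device using the
# Qualcomm AI Hub Models package (pip install qai-hub-models). The model id and
# device name are assumptions for illustration; additional flags (custom
# weights, input shapes, target runtime) are documented in the repository.
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m",
        "qai_hub_models.models.whisper_large_v3_turbo.export",  # assumed model id
        "--device", "Snapdragon 8 Elite QRD",                    # example target device
    ],
    check=True,
)
```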
Model Details
Model Type: Speech recognition
Model Stats:
- Model checkpoint: openai/whisper-large-v3-turbo
- Input resolution: 128x3000 (30 seconds of audio)
- Max decoded sequence length: 200 tokens
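For reference, the 128x3000 input corresponds to a log-Mel spectrogram with 128 Mel bins over 3,000 frames (30 seconds of 16 kHz audio). Below is a minimal sketch of producing such an input with the Hugging Face feature extractor for the same checkpoint; this assumes the `transformers` package, and the exported model's own pipeline may use its own preprocessing.

```python
# Sketch: build the 128x3000 log-Mel input expected by the encoder, using the
# Hugging Face feature extractor for the openai/whisper-large-v3-turbo
# checkpoint. Treat this as illustrative preprocessing only.
import numpy as np
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-large-v3-turbo")

sample_rate = 16_000
audio = np.zeros(30 * sample_rate, dtype=np.float32)  # placeholder: 30 s of silence

features = feature_extractor(audio, sampling_rate=sample_rate, return_tensors="np")
mel = features.input_features  # shape (1, 128, 3000)
print(mel.shape)
```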
Performance Summary
| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| decoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.141 ms | 42 - 52 MB | NPU |
| decoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X2 Elite | 4.546 ms | 399 - 399 MB | NPU |
| decoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 8.301 ms | 399 - 399 MB | NPU |
| decoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 7.703 ms | 44 - 55 MB | NPU |
| decoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 9.71 ms | 34 - 36 MB | NPU |
| decoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 10.284 ms | 33 - 69 MB | NPU |
| decoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 6.631 ms | 20 - 30 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.163 ms | 33 - 43 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Snapdragon® X2 Elite | 5.099 ms | 33 - 33 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 8.106 ms | 33 - 33 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 7.805 ms | 33 - 41 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | 15.264 ms | 33 - 40 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 9.658 ms | 33 - 38 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | 10.554 ms | 33 - 40 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 10.323 ms | 33 - 72 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 16.33 ms | 33 - 42 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA7255P | 15.264 ms | 33 - 40 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | 11.891 ms | 33 - 38 MB | NPU |
| decoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 6.616 ms | 4 - 17 MB | NPU |
| encoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 249.498 ms | 62 - 70 MB | NPU |
| encoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X2 Elite | 250.303 ms | 1276 - 1276 MB | NPU |
| encoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 585.925 ms | 1274 - 1274 MB | NPU |
| encoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 422.988 ms | 64 - 75 MB | NPU |
| encoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 577.46 ms | 0 - 1282 MB | NPU |
| encoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 700.978 ms | 33 - 36 MB | NPU |
| encoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 317.001 ms | 64 - 76 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 248.558 ms | 7 - 16 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Snapdragon® X2 Elite | 247.321 ms | 1 - 1 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 580.939 ms | 1 - 1 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 416.445 ms | 1 - 8 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | 2174.853 ms | 1 - 7 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 579.071 ms | 1 - 3 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | 707.156 ms | 1 - 7 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 699.773 ms | 1 - 32 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 1312.236 ms | 1 - 15 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA7255P | 2174.853 ms | 1 - 7 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | 864.952 ms | 1 - 10 MB | NPU |
| encoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 306.785 ms | 1 - 14 MB | NPU |
License
- The license for the original implementation of Whisper-Large-V3-Turbo can be found here.
Community
- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.
