ConvNext-Tiny: Optimized for Qualcomm Devices

ConvNext-Tiny is a machine learning model that classifies images from the ImageNet dataset. It can also be used as a backbone for building more complex models for specific use cases.

This is based on the implementation of ConvNext-Tiny found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.
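If you want to try the underlying network locally before targeting a device, the torchvision implementation can be used directly. A minimal sketch, assuming the torchvision ConvNeXt-Tiny ImageNet weights and an illustrative image path (`dog.jpg` is a placeholder, not part of this repository):

```python
import torch
from PIL import Image
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

# Load the pretrained ImageNet checkpoint and its matching preprocessing.
weights = ConvNeXt_Tiny_Weights.DEFAULT
model = convnext_tiny(weights=weights).eval()
preprocess = weights.transforms()

# "dog.jpg" is a placeholder path for any RGB image.
image = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    logits = model(image)

# Map the top prediction back to a human-readable ImageNet label.
class_id = logits.argmax(dim=1).item()
print(weights.meta["categories"][class_id])
```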

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
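Once signed up, you can connect the qai-hub Python client to your account (installed with `pip install qai-hub` and configured with `qai-hub configure --api_token <token>`). A minimal sketch that lists the hosted devices available to you:

```python
import qai_hub as hub

# List the hosted devices your account can target.
for device in hub.get_devices():
    print(device.name)
```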

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment. Precision is either float (unquantized) or w8a16 (8-bit weights with 16-bit activations).

| Runtime | Precision | Chipset | SDK Versions | Download |
|---------|-----------|---------|--------------|----------|
| ONNX | float | Universal | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| ONNX | w8a16 | Universal | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| QNN_DLC | float | Universal | QAIRT 2.43 | Download |
| QNN_DLC | w8a16 | Universal | QAIRT 2.43 | Download |
| TFLITE | float | Universal | QAIRT 2.43, TFLite 2.19.1 | Download |

For more device-specific assets and performance metrics, visit ConvNext-Tiny on Qualcomm® AI Hub.
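After downloading one of the ONNX assets, you can sanity-check it locally with ONNX Runtime before deploying. A minimal sketch, assuming the file was saved as `convnext_tiny.onnx` (an illustrative filename) and that the graph takes a single 1x3x224x224 input:

```python
import numpy as np
import onnxruntime as ort

# "convnext_tiny.onnx" is a placeholder for the downloaded asset path.
session = ort.InferenceSession("convnext_tiny.onnx")

# Inspect the input the exported graph actually expects.
inp = session.get_inputs()[0]
print(inp.name, inp.shape)

# Run one forward pass on dummy data shaped like a 224x224 RGB image.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
print(outputs[0].shape)  # expected: (1, 1000) for the ImageNet classes
```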

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See our ConvNext-Tiny repository on GitHub for usage instructions.
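The export scripts in that repository (typically invoked as `python -m qai_hub_models.models.convnext_tiny.export`) wrap the compile workflow for you. As a sketch of what a custom export can look like with the qai_hub client directly (the device name, input shape, and use of the torchvision checkpoint here are illustrative assumptions, not the repository's exact script):

```python
import qai_hub as hub
import torch
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

# A fine-tuned checkpoint could be loaded here instead of the
# torchvision ImageNet weights (an illustrative choice).
model = convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT).eval()

# Trace with the input shape you intend to deploy.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Compile for a hosted device; the device name is an assumption --
# pick any device returned by hub.get_devices().
compile_job = hub.submit_compile_job(
    model=traced,
    device=hub.Device("Samsung Galaxy S24 (Family)"),
    input_specs=dict(image_tensor=(1, 3, 224, 224)),
)
target_model = compile_job.get_target_model()
```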

Model Details

Model Type: Image classification

Model Stats:

  • Model checkpoint: ImageNet
  • Input resolution: 224x224
  • Number of parameters: 28.6M
  • Model size (float): 109 MB
  • Model size (w8a16): 28.9 MB
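The 224x224 input resolution implies standard ImageNet-style preprocessing. A minimal sketch, assuming torchvision transforms and the usual ImageNet normalization constants (confirm against the exported model's expected input format):

```python
from PIL import Image
from torchvision import transforms

# Standard ImageNet preprocessing for a 224x224 model input;
# mean/std are the usual ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")  # placeholder path
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
```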

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|-------|---------|-----------|---------|---------------------|------------------------|----------------------|
| ConvNext-Tiny | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 1.268 | 1 - 127 | NPU |
| ConvNext-Tiny | ONNX | float | Snapdragon® X2 Elite | 1.338 | 57 - 57 | NPU |
| ConvNext-Tiny | ONNX | float | Snapdragon® X Elite | 2.904 | 57 - 57 | NPU |
| ConvNext-Tiny | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 2.036 | 0 - 172 | NPU |
| ConvNext-Tiny | ONNX | float | Qualcomm® QCS8550 (Proxy) | 2.701 | 0 - 38 | NPU |
| ConvNext-Tiny | ONNX | float | Qualcomm® QCS9075 | 3.946 | 1 - 4 | NPU |
| ConvNext-Tiny | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 1.551 | 0 - 122 | NPU |
| ConvNext-Tiny | ONNX | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 1.083 | 0 - 115 | NPU |
| ConvNext-Tiny | ONNX | w8a16 | Snapdragon® X2 Elite | 1.201 | 29 - 29 | NPU |
| ConvNext-Tiny | ONNX | w8a16 | Snapdragon® X Elite | 2.818 | 29 - 29 | NPU |
| ConvNext-Tiny | ONNX | w8a16 | Snapdragon® 8 Gen 3 Mobile | 1.812 | 0 - 142 | NPU |
| ConvNext-Tiny | ONNX | w8a16 | Qualcomm® QCS6490 | 414.728 | 49 - 64 | CPU |
| ConvNext-Tiny | ONNX | w8a16 | Qualcomm® QCS8550 (Proxy) | 2.525 | 0 - 35 | NPU |
| ConvNext-Tiny | ONNX | w8a16 | Qualcomm® QCS9075 | 2.668 | 0 - 3 | NPU |
| ConvNext-Tiny | ONNX | w8a16 | Qualcomm® QCM6690 | 208.185 | 61 - 75 | CPU |
| ConvNext-Tiny | ONNX | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 1.384 | 0 - 109 | NPU |
| ConvNext-Tiny | ONNX | w8a16 | Snapdragon® 7 Gen 4 Mobile | 200.224 | 59 - 74 | CPU |
| ConvNext-Tiny | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 1.63 | 1 - 127 | NPU |
| ConvNext-Tiny | QNN_DLC | float | Snapdragon® X2 Elite | 2.037 | 1 - 1 | NPU |
| ConvNext-Tiny | QNN_DLC | float | Snapdragon® X Elite | 3.923 | 1 - 1 | NPU |
| ConvNext-Tiny | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 2.659 | 0 - 170 | NPU |
| ConvNext-Tiny | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 3.685 | 1 - 2 | NPU |
| ConvNext-Tiny | QNN_DLC | float | Qualcomm® SA8775P | 5.045 | 1 - 126 | NPU |
| ConvNext-Tiny | QNN_DLC | float | Qualcomm® QCS9075 | 4.876 | 1 - 3 | NPU |
| ConvNext-Tiny | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 9.636 | 0 - 169 | NPU |
| ConvNext-Tiny | QNN_DLC | float | Qualcomm® SA8295P | 8.974 | 1 - 125 | NPU |
| ConvNext-Tiny | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.031 | 1 - 126 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 1.284 | 0 - 100 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Snapdragon® X2 Elite | 1.607 | 0 - 0 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Snapdragon® X Elite | 3.4 | 0 - 0 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Snapdragon® 8 Gen 3 Mobile | 2.201 | 0 - 122 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Qualcomm® QCS6490 | 9.081 | 0 - 2 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Qualcomm® QCS8275 (Proxy) | 6.881 | 0 - 96 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Qualcomm® QCS8550 (Proxy) | 3.115 | 0 - 137 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Qualcomm® SA8775P | 3.524 | 0 - 97 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Qualcomm® QCS9075 | 3.361 | 0 - 2 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Qualcomm® QCM6690 | 21.836 | 0 - 250 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Qualcomm® QCS8450 (Proxy) | 4.233 | 0 - 121 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Qualcomm® SA7255P | 6.881 | 0 - 96 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Qualcomm® SA8295P | 4.726 | 0 - 93 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 1.605 | 0 - 98 | NPU |
| ConvNext-Tiny | QNN_DLC | w8a16 | Snapdragon® 7 Gen 4 Mobile | 3.45 | 0 - 107 | NPU |
| ConvNext-Tiny | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 1.303 | 0 - 122 | NPU |
| ConvNext-Tiny | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 2.133 | 0 - 169 | NPU |
| ConvNext-Tiny | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 13.977 | 0 - 121 | NPU |
| ConvNext-Tiny | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 2.843 | 0 - 2 | NPU |
| ConvNext-Tiny | TFLITE | float | Qualcomm® SA8775P | 4.274 | 0 - 123 | NPU |
| ConvNext-Tiny | TFLITE | float | Qualcomm® QCS9075 | 4.082 | 0 - 59 | NPU |
| ConvNext-Tiny | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 8.889 | 0 - 160 | NPU |
| ConvNext-Tiny | TFLITE | float | Qualcomm® SA7255P | 13.977 | 0 - 121 | NPU |
| ConvNext-Tiny | TFLITE | float | Qualcomm® SA8295P | 7.901 | 0 - 119 | NPU |
| ConvNext-Tiny | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 1.588 | 0 - 124 | NPU |
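The numbers above come from profiling on hosted devices. To reproduce a measurement for your own asset, a profile job can be submitted through the qai_hub client. A minimal sketch, assuming a downloaded TFLite asset saved as `convnext_tiny.tflite` and the same illustrative device name used earlier:

```python
import qai_hub as hub

# "convnext_tiny.tflite" is a placeholder for the downloaded asset path.
uploaded = hub.upload_model("convnext_tiny.tflite")

# Profile on a hosted device; the device name is an illustrative assumption.
profile_job = hub.submit_profile_job(
    model=uploaded,
    device=hub.Device("Samsung Galaxy S24 (Family)"),
)
profile_job.wait()
print(profile_job.download_profile())
```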

License

  • The license for the original implementation of ConvNext-Tiny can be found here.
