---
library_name: pytorch
license: other
tags:
- bu_auto
- android
pipeline_tag: image-classification
---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/web-assets/model_demo.png)

# InternImage: Optimized for Qualcomm Devices

InternImage employs DCNv3 as its core operator, equipping the model with the dynamic and effective receptive fields required for downstream tasks such as object detection and segmentation, while enabling adaptive spatial aggregation.

This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/blob/main/src/qai_hub_models/models/internimage) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).

Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.

## Getting Started

There are two ways to deploy this model on your device:

### Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.
| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| QNN_CONTEXT_BINARY | float | qualcomm_qcs8450_proxy | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_qcs8450_proxy.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_qcs8550_proxy | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_qcs8550_proxy.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_qcs9075 | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_qcs9075.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_sa7255p | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_sa7255p.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_sa8295p | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_sa8295p.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_sa8775p | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_sa8775p.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_8_elite_for_galaxy | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_snapdragon_8_elite_for_galaxy.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_8_elite_gen5 | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_snapdragon_8_elite_gen5.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_8gen3 | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_snapdragon_8gen3.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_x2_elite | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_snapdragon_x2_elite.zip) |
| QNN_CONTEXT_BINARY | float | qualcomm_snapdragon_x_elite | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/internimage/releases/v0.50.2/internimage-qnn_context_binary-float-qualcomm_snapdragon_x_elite.zip) |

For more device-specific assets and performance metrics, visit **[InternImage on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/internimage)**.

### Option 2: Export with Custom Configurations

Use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/blob/main/src/qai_hub_models/models/internimage) Python library to compile and export the model with your own:

- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here. See our repository for [InternImage on GitHub](https://github.com/qualcomm/ai-hub-models/blob/main/src/qai_hub_models/models/internimage) for usage instructions.
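The custom-export flow above can be sketched from the command line. This is a minimal sketch assuming the usual Qualcomm AI Hub Models per-model export entry point (`python -m qai_hub_models.models.<model>.export`); the exact package extras, device names, and flags should be checked against the linked GitHub README:

```shell
# Sketch only -- package extras and export flags are assumptions;
# verify them against the InternImage README on GitHub.
pip install "qai_hub_models[internimage]"

# An AI Hub API token (from your Qualcomm AI Hub account) must be
# configured before jobs can be submitted:
# qai-hub configure --api_token <YOUR_API_TOKEN>

# Compile, profile, and export for a target device; the device name
# and runtime value below are illustrative.
python -m qai_hub_models.models.internimage.export \
    --device "Samsung Galaxy S24 (Family)" \
    --target-runtime qnn
```

Running the export submits compile and profile jobs to AI Hub Workbench and downloads the resulting target asset locally.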
## Model Details

**Model Type:** Image classification

**Model Stats:**

- Model checkpoint: internimage_t_1k_224
- Input resolution: 1x3x224x224
- Number of parameters: 30.6M
- Model size (float): 117 MB

## Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 19.931 | 1 - 10 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® X2 Elite | 20.745 | 66 - 66 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 52.06 | 66 - 66 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 34.983 | 1 - 7 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 50.021 | 0 - 81 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 51.659 | 1 - 4 | NPU |
| InternImage | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 25.44 | 1 - 7 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 20.488 | 1 - 10 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® X2 Elite | 21.475 | 1 - 1 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 53.296 | 1 - 1 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 36.282 | 1 - 8 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | 98.726 | 1 - 9 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 51.086 | 1 - 2 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 52.771 | 3 - 5 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 59.749 | 1 - 10 | NPU |
| InternImage | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 25.901 | 1 - 10 | NPU |
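Given the 1x3x224x224 input resolution listed under Model Stats, host-side preprocessing can be sketched with NumPy alone. This is a generic NCHW image-classifier pipeline; the ImageNet mean/std normalization is a common default for 224x224 classification checkpoints, not a value confirmed for this specific export:

```python
import numpy as np

# ImageNet normalization constants -- a common default for classification
# checkpoints; confirm against the model's own preprocessing pipeline.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert a 224x224x3 uint8 RGB image to a 1x3x224x224 float32 tensor."""
    x = image_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - MEAN) / STD                            # per-channel normalize
    x = np.transpose(x, (2, 0, 1))                  # HWC -> CHW
    return x[np.newaxis, ...]                       # add batch dim -> NCHW

# Example with a dummy image:
dummy = np.zeros((224, 224, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape)  # (1, 3, 224, 224)
```

The resulting array matches the model's expected input shape and can be fed to whichever runtime wrapper you deploy with.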
## License

* The license for the original implementation of InternImage can be found [here](https://github.com/OpenGVLab/InternImage/tree/master?tab=MIT-1-ov-file).

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).