---
library_name: pytorch
license: other
tags:
- bu_auto
- android
pipeline_tag: image-classification
---
![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/web-assets/model_demo.png)
# ConvNext-Base: Optimized for Qualcomm Devices
ConvNext-Base is a machine learning model that can classify images from the ImageNet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This is based on the implementation of ConvNext-Base found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py).
This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/convnext_base) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).
Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.
## Getting Started
There are two ways to deploy this model on your device:
### Option 1: Download Pre-Exported Models
Below are pre-exported model assets ready for deployment.
| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.46.0/convnext_base-onnx-float.zip)
| ONNX | w8a16 | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.46.0/convnext_base-onnx-w8a16.zip)
| QNN_DLC | float | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.46.0/convnext_base-qnn_dlc-float.zip)
| QNN_DLC | w8a16 | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.46.0/convnext_base-qnn_dlc-w8a16.zip)
| TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.46.0/convnext_base-tflite-float.zip)
For more device-specific assets and performance metrics, visit **[ConvNext-Base on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/convnext_base)**.
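As a quick local sanity check, the float ONNX package can be run with ONNX Runtime. The sketch below is minimal and makes assumptions: the file name of the extracted `.onnx` model and the NCHW input layout are illustrative, so adjust them to match the contents of the downloaded archive.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical path to the model extracted from the downloaded zip; the actual
# file name inside the archive may differ.
session = ort.InferenceSession("convnext_base.onnx")
input_name = session.get_inputs()[0].name

# ConvNext-Base expects a 1x3x224x224 float tensor normalized with ImageNet
# statistics; a random tensor stands in for a real preprocessed image here.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(1, 3, 1, 1)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(1, 3, 1, 1)
image = ((image - mean) / std).astype(np.float32)

logits = session.run(None, {input_name: image})[0]
print("Top-1 ImageNet class index:", int(np.argmax(logits)))
```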
### Option 2: Export with Custom Configurations
Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/convnext_base) Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations
This option is ideal if you need to customize the model beyond the default configuration provided here.
See our repository for [ConvNext-Base on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/convnext_base) for usage instructions.
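For orientation, a compile flow through the Python API might look like the sketch below. It assumes `qai-hub` and `qai-hub-models` are installed and an AI Hub API token is configured; the device name and the input name `image` are illustrative, and the GitHub instructions linked above remain the authoritative reference.

```python
import torch
import qai_hub as hub
from qai_hub_models.models.convnext_base import Model

# Load the pre-trained PyTorch model (see the GitHub repository for how to
# load custom, fine-tuned checkpoints instead).
torch_model = Model.from_pretrained()
torch_model.eval()

# Trace with a sample input at the default 224x224 resolution; change the
# shape here to export with a custom input resolution.
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)

# Compile for a hosted Qualcomm device; pick the device and runtime options
# that match your deployment target.
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=hub.Device("Samsung Galaxy S24 (Family)"),
    input_specs=dict(image=(1, 3, 224, 224)),
)
target_model = compile_job.get_target_model()
```

The export scripts in the GitHub repository wrap steps like these behind a single command, so the sketch above is mainly useful when you need finer control over the compile options.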
## Model Details
**Model Type:** Image classification
**Model Stats:**
- Model checkpoint: ImageNet
- Input resolution: 224x224
- Number of parameters: 88.6M
- Model size (float): 338 MB
- Model size (w8a16): 88.7 MB
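The checkpoint and input resolution above imply standard ImageNet-style preprocessing. A minimal sketch using torchvision transforms (resize, 224x224 center crop, ImageNet mean/std normalization) is shown below; the exact preprocessing expected by a given exported asset may differ, so treat this as an assumption rather than a specification.

```python
from PIL import Image
from torchvision import transforms

# Standard ImageNet evaluation preprocessing for a 224x224 model input.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("input.jpg").convert("RGB")  # any RGB image on disk
batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)
```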
## Performance Summary
| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit
|---|---|---|---|---|---|---
| ConvNext-Base | ONNX | float | Snapdragon® X Elite | 7.488 ms | 176 - 176 MB | NPU
| ConvNext-Base | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 5.436 ms | 0 - 394 MB | NPU
| ConvNext-Base | ONNX | float | Qualcomm® QCS8550 (Proxy) | 7.317 ms | 0 - 638 MB | NPU
| ConvNext-Base | ONNX | float | Qualcomm® QCS9075 | 11.598 ms | 0 - 4 MB | NPU
| ConvNext-Base | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.246 ms | 0 - 329 MB | NPU
| ConvNext-Base | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.352 ms | 0 - 332 MB | NPU
| ConvNext-Base | ONNX | w8a16 | Snapdragon® X Elite | 219.265 ms | 137 - 137 MB | NPU
| ConvNext-Base | ONNX | w8a16 | Qualcomm® QCS6490 | 1158.253 ms | 41 - 86 MB | CPU
| ConvNext-Base | ONNX | w8a16 | Qualcomm® QCS9075 | 317.703 ms | 93 - 96 MB | NPU
| ConvNext-Base | ONNX | w8a16 | Qualcomm® QCM6690 | 737.008 ms | 34 - 46 MB | CPU
| ConvNext-Base | ONNX | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 268.327 ms | 78 - 229 MB | NPU
| ConvNext-Base | ONNX | w8a16 | Snapdragon® 7 Gen 4 Mobile | 691.251 ms | 35 - 49 MB | CPU
| ConvNext-Base | ONNX | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 237.073 ms | 89 - 240 MB | NPU
| ConvNext-Base | QNN_DLC | float | Snapdragon® X Elite | 8.589 ms | 1 - 1 MB | NPU
| ConvNext-Base | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 6.113 ms | 0 - 350 MB | NPU
| ConvNext-Base | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 42.453 ms | 1 - 279 MB | NPU
| ConvNext-Base | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 8.213 ms | 0 - 33 MB | NPU
| ConvNext-Base | QNN_DLC | float | Qualcomm® QCS9075 | 12.381 ms | 1 - 3 MB | NPU
| ConvNext-Base | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 20.603 ms | 0 - 337 MB | NPU
| ConvNext-Base | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.689 ms | 1 - 281 MB | NPU
| ConvNext-Base | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.534 ms | 1 - 283 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® X Elite | 6.26 ms | 0 - 0 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® 8 Gen 3 Mobile | 4.106 ms | 0 - 248 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS6490 | 23.818 ms | 0 - 2 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS8275 (Proxy) | 14.472 ms | 0 - 199 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS8550 (Proxy) | 5.888 ms | 0 - 2 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS9075 | 6.122 ms | 0 - 2 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCM6690 | 71.461 ms | 0 - 395 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS8450 (Proxy) | 9.182 ms | 0 - 246 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 3.31 ms | 0 - 190 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® 7 Gen 4 Mobile | 7.715 ms | 0 - 247 MB | NPU
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 2.559 ms | 0 - 201 MB | NPU
| ConvNext-Base | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 5.533 ms | 0 - 345 MB | NPU
| ConvNext-Base | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 41.241 ms | 0 - 274 MB | NPU
| ConvNext-Base | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 7.334 ms | 0 - 3 MB | NPU
| ConvNext-Base | TFLITE | float | Qualcomm® QCS9075 | 11.149 ms | 0 - 177 MB | NPU
| ConvNext-Base | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 19.73 ms | 0 - 330 MB | NPU
| ConvNext-Base | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.167 ms | 0 - 277 MB | NPU
| ConvNext-Base | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.174 ms | 0 - 279 MB | NPU
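Figures like those above come from profiling on hosted devices through Qualcomm AI Hub. A rough sketch of submitting a profile job yourself is shown below; the model ID and device name are hypothetical placeholders, and measured latency and memory will vary with device, runtime, and SDK version.

```python
import qai_hub as hub

# Hypothetical ID of a previously compiled model in your AI Hub account
# (for example, the target model produced by the compile sketch above).
target_model = hub.get_model("mabc123")

profile_job = hub.submit_profile_job(
    model=target_model,
    device=hub.Device("Samsung Galaxy S24 (Family)"),
)

# Retrieves the raw profiling results (latency, memory, compute-unit
# breakdown) once the job completes.
profile = profile_job.download_profile()
print(profile)
```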
## License
* The license for the original implementation of ConvNext-Base can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
## References
* [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).