Upload README.md with huggingface_hub

README.md (CHANGED)
@@ -34,10 +34,12 @@ More details on model performance across various devices can be found
 - Model size: 330 MB


+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite |
-
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 78.496 ms | 0 - 3 MB | FP16 | NPU | [VIT.tflite](https://huggingface.co/qualcomm/VIT/blob/main/VIT.tflite) |
+


 ## Installation
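The Target Model column above links the compiled `VIT.tflite` asset. As a rough local sanity check (distinct from the AI Hub profiling that produced these numbers), the downloaded file can be run with the standard TensorFlow Lite interpreter. A minimal sketch, with input shape and dtype read from the model rather than assumed:

```python
# Minimal sketch: CPU reference run of the downloaded VIT.tflite with the
# TensorFlow Lite interpreter. The on-device NPU latency in the table comes
# from AI Hub profiling, not from this script.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="VIT.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.zeros(inp["shape"], dtype=inp["dtype"])  # placeholder image tensor
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

logits = interpreter.get_tensor(out["index"])
print(logits.shape)  # ImageNet class logits
```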
@@ -94,19 +96,11 @@ device. This script does the following:
 python -m qai_hub_models.models.vit.export
 ```

-```
-Profile Job summary of VIT
---------------------------------------------------
-Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 65.94 ms
-Estimated Peak Memory Range: 0.57-0.57 MB
-Compute Units: NPU (385) | Total (385)


-```
 ## How does this work?

-This [export script](https://
+This [export script](https://aihub.qualcomm.com/models/vit/qai_hub_models/models/VIT/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Let's go through each step below in detail:

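The `export` entry point shown in this hunk wraps the `qai_hub` client API. Below is a rough sketch of the compile-and-profile flow it automates; `Model.from_pretrained()` and `get_input_spec()` follow `qai_hub_models` conventions, and the device name is illustrative only, not the script's actual configuration:

```python
# Sketch of the compile/profile flow that the export script automates,
# using the qai_hub client API. Device choice is an example, not the default.
import torch
import qai_hub as hub
from qai_hub_models.models.vit import Model

torch_model = Model.from_pretrained()
torch_model.eval()

# Trace the PyTorch model so AI Hub can compile it for the target runtime.
input_spec = torch_model.get_input_spec()
example_inputs = [torch.zeros(*shape) for shape, _ in input_spec.values()]
traced = torch.jit.trace(torch_model, example_inputs)

# 1. Compile for a cloud-hosted device (TFLite target by default).
compile_job = hub.submit_compile_job(
    model=traced,
    device=hub.Device("Samsung Galaxy S23 Ultra"),
    input_specs=input_spec,
)

# 2. Profile the compiled model on the same device.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=hub.Device("Samsung Galaxy S23 Ultra"),
)
```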
@@ -183,6 +177,7 @@ spot check the output with expected output.
 AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).


+
 ## Run demo on a cloud-hosted device

 You can also run the demo on-device.
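The demo referenced in this hunk is launched through the model's `demo` entry point. The invocation below assumes the `--on-device` flag used by `qai_hub_models` around this release; check `--help` if the option name has since changed:

```
python -m qai_hub_models.models.vit.demo --on-device
```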
@@ -219,7 +214,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of VIT can be found
 [here](https://github.com/pytorch/vision/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](
+- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)

 ## References
 * [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)