qaihm-bot committed commit edb8f9b (verified) · 1 parent: aa04ee6

Upload README.md with huggingface_hub

Files changed (1): README.md (+40 −19)
README.md CHANGED
@@ -18,7 +18,7 @@ tags:
 
 SqueezeNet is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
 
-This model is an implementation of SqueezeNet-1_1 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/squeezenet.py).
+This model is an implementation of SqueezeNet-1_1 found [here]({source_repo}).
 This repository provides scripts to run SqueezeNet-1_1 on Qualcomm® devices.
 More details on model performance across various devices, can be found
 [here](https://aihub.qualcomm.com/models/squeezenet1_1).
@@ -33,15 +33,32 @@ More details on model performance across various devices, can be found
 - Number of parameters: 1.24M
 - Model size: 4.73 MB
 
+| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
+|---|---|---|---|---|---|---|---|---|
+| SqueezeNet-1_1 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 0.641 ms | 0 - 2 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite) |
+| SqueezeNet-1_1 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 0.71 ms | 1 - 6 MB | FP16 | NPU | [SqueezeNet-1_1.so](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.so) |
+| SqueezeNet-1_1 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 0.653 ms | 0 - 40 MB | FP16 | NPU | [SqueezeNet-1_1.onnx](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.onnx) |
+| SqueezeNet-1_1 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 0.461 ms | 0 - 26 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite) |
+| SqueezeNet-1_1 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 0.512 ms | 1 - 12 MB | FP16 | NPU | [SqueezeNet-1_1.so](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.so) |
+| SqueezeNet-1_1 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 0.578 ms | 0 - 27 MB | FP16 | NPU | [SqueezeNet-1_1.onnx](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.onnx) |
+| SqueezeNet-1_1 | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 0.64 ms | 0 - 2 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite) |
+| SqueezeNet-1_1 | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 0.645 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+| SqueezeNet-1_1 | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 0.641 ms | 0 - 6 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite) |
+| SqueezeNet-1_1 | SA8255 (Proxy) | SA8255P Proxy | QNN | 0.654 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+| SqueezeNet-1_1 | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 0.64 ms | 0 - 2 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite) |
+| SqueezeNet-1_1 | SA8775 (Proxy) | SA8775P Proxy | QNN | 0.644 ms | 0 - 2 MB | FP16 | NPU | Use Export Script |
+| SqueezeNet-1_1 | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 0.639 ms | 0 - 2 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite) |
+| SqueezeNet-1_1 | SA8650 (Proxy) | SA8650P Proxy | QNN | 0.642 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+| SqueezeNet-1_1 | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 0.813 ms | 0 - 27 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite) |
+| SqueezeNet-1_1 | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 0.891 ms | 1 - 14 MB | FP16 | NPU | Use Export Script |
+| SqueezeNet-1_1 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 0.431 ms | 0 - 16 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite) |
+| SqueezeNet-1_1 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 0.391 ms | 1 - 9 MB | FP16 | NPU | Use Export Script |
+| SqueezeNet-1_1 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 0.42 ms | 0 - 17 MB | FP16 | NPU | [SqueezeNet-1_1.onnx](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.onnx) |
+| SqueezeNet-1_1 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 0.784 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
+| SqueezeNet-1_1 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.697 ms | 3 - 3 MB | FP16 | NPU | [SqueezeNet-1_1.onnx](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.onnx) |
 
 
 
-| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
-| ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.643 ms | 0 - 2 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.713 ms | 1 - 3 MB | FP16 | NPU | [SqueezeNet-1_1.so](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.so)
-
-
 
 ## Installation
 
@@ -96,16 +113,16 @@ device. This script does the following:
 ```bash
 python -m qai_hub_models.models.squeezenet1_1.export
 ```
-
 ```
-Profile Job summary of SqueezeNet-1_1
---------------------------------------------------
-Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 0.78 ms
-Estimated Peak Memory Range: 0.57-0.57 MB
-Compute Units: NPU (70) | Total (70)
-
-
+Profiling Results
+------------------------------------------------------------
+SqueezeNet-1_1
+Device                          : Samsung Galaxy S23 (13)
+Runtime                         : TFLITE
+Estimated inference time (ms)   : 0.6
+Estimated peak memory usage (MB): [0, 2]
+Total # Ops                     : 41
+Compute Unit(s)                 : NPU (41 ops)
 ```
 
 
@@ -204,15 +221,19 @@ provides instructions on how to use the `.so` shared library in an Android appl
 Get more details on SqueezeNet-1_1's performance across various devices [here](https://aihub.qualcomm.com/models/squeezenet1_1).
 Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 
+
 ## License
-- The license for the original implementation of SqueezeNet-1_1 can be found
-[here](https://github.com/pytorch/vision/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+* The license for the original implementation of SqueezeNet-1_1 can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE).
+* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+
+
 
 ## References
 * [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size](https://arxiv.org/abs/1602.07360)
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/squeezenet.py)
 
+
+
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:[email protected]).