Felix Marty committed · Commit f6c4ac6 · Parent(s): 3c0e73e

better readme

README.md CHANGED
@@ -16,6 +16,28 @@ and the optimized model is converted to the onnx format.

## How to use

+```python
+from optimum.onnxruntime.modeling_ort import ORTModelForImageClassification
+from transformers import AutoFeatureExtractor
+
+from PIL import Image
+import requests
+
+preprocessor = AutoFeatureExtractor.from_pretrained("fxmarty/levit-256-onnx")
+ort_model = ORTModelForImageClassification.from_pretrained("fxmarty/levit-256-onnx")
+
+url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
+image = Image.open(requests.get(url, stream=True).raw)
+
+inputs = preprocessor(images=image, return_tensors="pt")
+outputs = ort_model(**inputs)
+
+predicted_class_idx = outputs.logits.argmax(-1).item()
+print("Predicted class:", ort_model.config.id2label[predicted_class_idx])
+```
+
+To be safe, also check that the ONNX model returns the same logits as the PyTorch model:
+
```python
from optimum.onnxruntime.modeling_ort import ORTModelForImageClassification
from transformers import AutoModelForImageClassification
@@ -65,6 +87,8 @@ Optimum-onnxruntime runtime:

```

+Run on your own machine with:
+
```python
from optimum.runs_base import TimeBenchmark

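The logits check referenced in the diff above is cut off by the hunk context (only its imports are visible). A minimal sketch of such a comparison, assuming the ONNX model was exported from the `facebook/levit-256` PyTorch checkpoint (an assumption, not confirmed by the diff), might look like:

```python
# Sketch only: the README's own comparison code is truncated in the diff above.
# Assumption: the ONNX export corresponds to the "facebook/levit-256" checkpoint.
import requests
import torch
from PIL import Image
from optimum.onnxruntime.modeling_ort import ORTModelForImageClassification
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

preprocessor = AutoFeatureExtractor.from_pretrained("fxmarty/levit-256-onnx")
ort_model = ORTModelForImageClassification.from_pretrained("fxmarty/levit-256-onnx")
pt_model = AutoModelForImageClassification.from_pretrained("facebook/levit-256")
pt_model.eval()

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = preprocessor(images=image, return_tensors="pt")

with torch.no_grad():
    pt_logits = pt_model(**inputs).logits
ort_logits = ort_model(**inputs).logits

# The two outputs should agree up to numerical noise from the export/optimization.
print("Max absolute difference:", (pt_logits - ort_logits).abs().max().item())
print("Close within atol=1e-4:", torch.allclose(pt_logits, ort_logits, atol=1e-4))
```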
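The benchmark under "Run on your own machine with:" is likewise truncated right after the `TimeBenchmark` import, so its exact arguments are not visible here. As a rough stand-in that avoids guessing that helper's signature, a plain timing loop over the ONNX model could look like the sketch below (the 224x224 input shape and the run counts are assumptions):

```python
# Plain-Python latency sketch; this does NOT reproduce the README's TimeBenchmark call.
import time
import torch
from optimum.onnxruntime.modeling_ort import ORTModelForImageClassification

ort_model = ORTModelForImageClassification.from_pretrained("fxmarty/levit-256-onnx")

# Dummy batch; LeViT-256 is assumed to take 224x224 RGB images.
pixel_values = torch.randn(1, 3, 224, 224)

warmup_runs, timed_runs = 10, 100  # hypothetical run counts
for _ in range(warmup_runs):
    ort_model(pixel_values=pixel_values)

latencies = []
for _ in range(timed_runs):
    start = time.perf_counter()
    ort_model(pixel_values=pixel_values)
    latencies.append(time.perf_counter() - start)

print(f"Mean latency over {timed_runs} runs: {1000 * sum(latencies) / len(latencies):.2f} ms")
```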