Update README.md
README.md CHANGED
library_name: transformers
---

## Model Details
### The CLIP model was fine-tuned from openai/clip-vit-base-patch32 to learn about what contributes to robustness in computer vision tasks.
### The model can generalize to arbitrary image classification tasks in a zero-shot manner.

Top predictions:

- Saree: 64.89%
- Dupatta: 25.81%
- Lehenga: 7.51%
- Leggings and Salwar: 0.84%
- Women Kurta: 0.44%

![image/png](https://cdn-uploads.huggingface.co/production/uploads/660bc03b5294ca0aada80fb9/Kl8Yd8fwFLtmeDbBLi4Fz.png)
### Use with Transformers
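Below is a minimal zero-shot classification sketch using the `CLIPModel` and `CLIPProcessor` classes from `transformers`. The repo id is a hypothetical placeholder (this model's actual Hugging Face id is not shown here), the example image is an arbitrary download, and the candidate labels mirror the "Top predictions" example above.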
```python3
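# Sketch under assumptions: "your-username/clip-finetune" is a hypothetical placeholder
# for this model's actual repo id, and the example image is an arbitrary COCO photo.
from PIL import Image
import requests
from transformers import CLIPModel, CLIPProcessor

model_id = "your-username/clip-finetune"  # hypothetical placeholder, not the real repo id
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Load any RGB image; here one is downloaded over HTTP.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Candidate labels mirror the "Top predictions" example above.
labels = ["Saree", "Dupatta", "Lehenga", "Leggings and Salwar", "Women Kurta"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarities -> probabilities

for label, p in sorted(zip(labels, probs[0].tolist()), key=lambda x: -x[1]):
    print(f"{label}: {p:.2%}")
```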