Update README.md
README.md (CHANGED)
@@ -7,6 +7,7 @@ tags:
 - generated_from_trainer
 datasets:
 - imagefolder
+- FastJobs/Visual_Emotional_Analysis
 metrics:
 - accuracy
 model-index:
@@ -25,6 +26,7 @@ model-index:
 - name: Accuracy
   type: accuracy
   value: 0.6125
+pipeline_tag: image-classification
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,22 +34,36 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vit-emotion-classification
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the FastJobs/Visual_Emotional_Analysis dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [FastJobs/Visual_Emotional_Analysis](https://huggingface.co/datasets/FastJobs/Visual_Emotional_Analysis) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.3802
 - Accuracy: 0.6125
 
-## Model description
-
-More information needed
-
 ## Intended uses & limitations
 
-More information needed
+### Intended Uses
+- Emotion classification from visual inputs (images).
+
+### Limitations
+- May reflect biases from the training dataset.
+- Performance may degrade in domains outside the training data.
+- Not suitable for critical or sensitive decision-making tasks.
 
 ## Training and evaluation data
 
-More information needed
+This model was trained on the [FastJobs/Visual_Emotional_Analysis](https://huggingface.co/datasets/FastJobs/Visual_Emotional_Analysis) dataset.
+
+The dataset contains:
+- **800 images** annotated with **8 emotion labels**:
+  - Anger
+  - Contempt
+  - Disgust
+  - Fear
+  - Happy
+  - Neutral
+  - Sad
+  - Surprise
+
 
 ## Training procedure
 
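For reference, a minimal sketch of loading the dataset described in the hunk above with the `datasets` library. The `train` split name and the `label` feature are assumptions based on typical imagefolder-style datasets, not stated in the card:

```python
from datasets import load_dataset

# Assumed to expose a single "train" split of 800 images
dataset = load_dataset("FastJobs/Visual_Emotional_Analysis", split="train")

print(len(dataset))                     # expected: 800
print(dataset.features["label"].names)  # expected: the 8 emotion labels above
```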
@@ -72,10 +88,32 @@ The following hyperparameters were used during training:
 | 0.0474 | 7.5 | 300 | 1.3802 | 0.6125 |
 | 0.0368 | 10.0 | 400 | 1.4388 | 0.5938 |
 
+## How to use this model
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoImageProcessor, ViTForImageClassification
+
+from huggingface_hub import login
+login(token="hf_...")  # optional: only needed for private or gated repos
+
+# Load the image to classify
+image = Image.open("image.jpg").convert("RGB")
+
+# Load the processor and the fine-tuned model
+image_processor = AutoImageProcessor.from_pretrained("digo-prayudha/vit-emotion-classification")
+model = ViTForImageClassification.from_pretrained("digo-prayudha/vit-emotion-classification")
+
+# Preprocess the image and run inference
+inputs = image_processor(image, return_tensors="pt")
+with torch.no_grad():
+    logits = model(**inputs).logits
+
+predicted_label = logits.argmax(-1).item()
+print(model.config.id2label[predicted_label])
+```
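As a usage note, not part of the commit: the snippet added above can also be written with the higher-level Transformers `pipeline` API, which bundles the processor and model in one object:

```python
from transformers import pipeline

# Wraps the image processor and model from the snippet above
classifier = pipeline("image-classification", model="digo-prayudha/vit-emotion-classification")

# Accepts a file path, URL, or PIL image; returns labels ranked by score
print(classifier("image.jpg"))
```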
 ### Framework versions
 
 - Transformers 4.47.1
 - Pytorch 2.5.1+cu121
 - Datasets 3.2.0
 - Tokenizers 0.21.0
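The card reports Loss 1.3802 and Accuracy 0.6125 on its evaluation set but does not say how that set was constructed. Below is a sketch of re-evaluating the model, assuming an 80/20 split of the 800 images; the split method and seed are guesses, so the result need not match the reported number:

```python
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, ViTForImageClassification

# Assumption: the eval set was carved out of the single 800-image split
dataset = load_dataset("FastJobs/Visual_Emotional_Analysis", split="train")
eval_ds = dataset.train_test_split(test_size=0.2, seed=42)["test"]

processor = AutoImageProcessor.from_pretrained("digo-prayudha/vit-emotion-classification")
model = ViTForImageClassification.from_pretrained("digo-prayudha/vit-emotion-classification")
model.eval()

correct = 0
for example in eval_ds:
    inputs = processor(example["image"].convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    correct += int(pred == example["label"])

print(f"accuracy: {correct / len(eval_ds):.4f}")
```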