---
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
datasets:
- dair-ai/emotion
license: mit
language:
- en
pipeline_tag: text-classification
base_model:
- google-bert/bert-base-uncased
---

# Model Trained Using AutoTrain

- Problem type: Text Classification
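
The model fine-tunes `google-bert/bert-base-uncased` on the `dair-ai/emotion` dataset (see the frontmatter above). As a quick sketch, the dataset's label set, which defines the classes this model predicts, can be inspected directly; this assumes the dataset's default split configuration:

```python
from datasets import load_dataset

# Load dair-ai/emotion; the default config ships train/validation/test splits.
emotion = load_dataset("dair-ai/emotion")

# The six emotion classes the classifier is trained on.
print(emotion["train"].features["label"].names)
# ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
```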

# Usage Example
```python
from transformers import pipeline

# Runs on CPU by default; uncomment device=0 below to run on the first GPU
emotion_classifier = pipeline(
    "text-classification",
    model="XuehangCang/Emotion-Classification",
    # device=0  # Use the first GPU device
)

texts = [
    "I'm so happy today!",
    "This is really sad.",
    "I'm a bit nervous about what's going to happen.",
    "This news makes me angry."
]

for text in texts:
    result = emotion_classifier(text)
    print(f"Text: {text}")
    print(f"Emotion classification result: {result}\n")

"""
Device set to use cpu
Text: I'm so happy today!
Emotion classification result: [{'label': 'joy', 'score': 0.9994311928749084}]

Text: This is really sad.
Emotion classification result: [{'label': 'sadness', 'score': 0.9989039897918701}]

Text: I'm a bit nervous about what's going to happen.
Emotion classification result: [{'label': 'fear', 'score': 0.998763918876648}]

Text: This news makes me angry.
Emotion classification result: [{'label': 'anger', 'score': 0.9977891445159912}]
"""
```
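
To inspect the full score distribution rather than only the top label, the text-classification pipeline accepts `top_k=None` (older `transformers` releases used `return_all_scores=True` for the same effect). A minimal sketch, reusing `emotion_classifier` from above:

```python
# Return scores for every class instead of only the argmax label.
all_scores = emotion_classifier("I'm so happy today!", top_k=None)
print(all_scores)
# e.g. [{'label': 'joy', 'score': 0.999...}, {'label': 'sadness', ...}, ...]
```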

## Validation Metrics

| Metric             | Value               |
|--------------------|---------------------|
| loss               | 0.13341853022575378 |
| f1_macro           | 0.9169826832623412  |
| f1_micro           | 0.943               |
| f1_weighted        | 0.9427985114313238  |
| precision_macro    | 0.9227534317185495  |
| precision_micro    | 0.943               |
| precision_weighted | 0.9430912986498113  |
| recall_macro       | 0.9119580961776227  |
| recall_micro       | 0.943               |
| recall_weighted    | 0.943               |
| accuracy           | 0.943               |
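
These numbers can in principle be checked against the `dair-ai/emotion` validation split. The sketch below assumes that split was the evaluation set (the AutoTrain run's exact setup is not documented here) and that the model's `id2label` names match the dataset's label names:

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

classifier = pipeline("text-classification", model="XuehangCang/Emotion-Classification")
val = load_dataset("dair-ai/emotion", split="validation")

# Map predicted label names back to the dataset's integer ids
# (assumes the model's label names match the dataset's).
names = val.features["label"].names
preds = [names.index(p["label"]) for p in classifier(val["text"], batch_size=32)]

print("accuracy:   ", accuracy_score(val["label"], preds))
print("f1_macro:   ", f1_score(val["label"], preds, average="macro"))
print("f1_weighted:", f1_score(val["label"], preds, average="weighted"))
```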