kellywong committed
Commit 7887c07 · 1 Parent(s): ddbae5f

Update README.md

Files changed (1)
  1. README.md +53 -0
README.md CHANGED
@@ -93,6 +93,59 @@ pip install sgnlp
## Examples
For the full code (including Emotion Entailment), please refer to the [SGNLP-Github](https://github.com/aisingapore/sgnlp). <br> Alternatively, you can also try out the [SGNLP-Demo](https://sgnlp.aisingapore.net/emotion-entailment) for Emotion Entailment.

Example of Emotion Entailment (for happiness):

```python
from sgnlp.models.emotion_entailment import (
    RecconEmotionEntailmentConfig,
    RecconEmotionEntailmentTokenizer,
    RecconEmotionEntailmentModel,
    RecconEmotionEntailmentPreprocessor,
    RecconEmotionEntailmentPostprocessor,
)

# Load model
config = RecconEmotionEntailmentConfig.from_pretrained(
    "https://storage.googleapis.com/sgnlp/models/reccon_emotion_entailment/config.json"
)
tokenizer = RecconEmotionEntailmentTokenizer.from_pretrained("roberta-base")
model = RecconEmotionEntailmentModel.from_pretrained(
    "https://storage.googleapis.com/sgnlp/models/reccon_emotion_entailment/pytorch_model.bin",
    config=config,
)
preprocessor = RecconEmotionEntailmentPreprocessor(tokenizer)
postprocessor = RecconEmotionEntailmentPostprocessor()

# Model predict
input_batch = {
    "emotion": ["happiness", "happiness", "happiness", "happiness"],
    "target_utterance": [
        "Thank you very much .",
        "Thank you very much .",
        "Thank you very much .",
        "Thank you very much .",
    ],
    "evidence_utterance": [
        "It's very thoughtful of you to invite me to your wedding .",
        "How can I forget my old friend ?",
        "My best wishes to you and the bride !",
        "Thank you very much .",
    ],
    "conversation_history": [
        "It's very thoughtful of you to invite me to your wedding . How can I forget my old friend ? My best wishes to you and the bride ! Thank you very much .",
        "It's very thoughtful of you to invite me to your wedding . How can I forget my old friend ? My best wishes to you and the bride ! Thank you very much .",
        "It's very thoughtful of you to invite me to your wedding . How can I forget my old friend ? My best wishes to you and the bride ! Thank you very much .",
        "It's very thoughtful of you to invite me to your wedding . How can I forget my old friend ? My best wishes to you and the bride ! Thank you very much .",
    ],
}

tensor_dict = preprocessor(input_batch)
raw_output = model(**tensor_dict)
output = postprocessor(raw_output)
```
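The postprocessed `output` holds one prediction per entry of `input_batch`, indicating whether the model judges the corresponding evidence utterance to entail (i.e. cause) the stated emotion in the target utterance.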

# Training
The training and evaluation datasets were derived from the RECCON dataset. The full dataset can be downloaded from the authors' [GitHub repository](https://github.com/declare-lab/RECCON/tree/main/data).
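
As a rough, unofficial sketch of how you might inspect the downloaded data, the snippet below fetches one RECCON annotation file and prints a few records. The file path and the assumption that the file is a JSON object keyed by dialogue ID are illustrative guesses about the repository layout, not part of the sgnlp pipeline; adjust them to match the actual repository contents.

```python
# Unofficial sketch: fetch a RECCON annotation file and peek at its structure.
# The path below is an assumed location; check the declare-lab/RECCON repository
# for the actual file names and folder layout.
import json
import urllib.request

RECCON_RAW = (
    "https://raw.githubusercontent.com/declare-lab/RECCON/main/"
    "data/original_annotation/dailydialog_train.json"  # assumed path
)

with urllib.request.urlopen(RECCON_RAW) as response:
    data = json.load(response)

# Assumed layout: a JSON object keyed by dialogue ID; fall back to a plain list
# if that assumption does not hold.
records = list(data.items()) if isinstance(data, dict) else list(enumerate(data))
print(f"{len(records)} dialogues loaded")
for dialogue_id, dialogue in records[:3]:
    print(dialogue_id, type(dialogue))
```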