---
license: apache-2.0
---
# Affective Norms Extrapolation Model for Polish Language

## Model Description

This transformer-based model extrapolates affective norms for Polish words across eight dimensions: valence, arousal, dominance, concreteness, age of acquisition, origin, significance, and imageability. It was fine-tuned from the Polish RoBERTa model (https://github.com/sdadas/polish-roberta), extended with additional layers that predict the affective dimensions. The model was first released as part of the publication "Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection."
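
The additional prediction layers can be pictured as one small regression head per affective dimension, sitting on top of the transformer's pooled output. A minimal PyTorch sketch of that idea (the class name, hidden size, and head layout here are illustrative, not the repository's actual `CustomModel`):

```python
import torch
import torch.nn as nn

# The eight dimensions predicted by the model (order as in the usage example below)
DIMENSIONS = ["valence", "arousal", "dominance", "origin",
              "significance", "concreteness", "imageability", "acquisition"]

class MultiHeadRegressor(nn.Module):
    """Illustrative sketch: one linear regression head per affective dimension."""
    def __init__(self, hidden_size=768):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(hidden_size, 1) for _ in DIMENSIONS)

    def forward(self, pooled):
        # pooled: (batch, hidden_size) sentence embedding from the encoder
        return tuple(head(pooled) for head in self.heads)

pooled = torch.randn(2, 768)  # stand-in for the RoBERTa pooled output
outputs = MultiHeadRegressor()(pooled)  # one (batch, 1) tensor per dimension
```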

## Training Data

The model was trained on the Polish affective norms dataset by Imbir (2016), which includes 4,900 words rated by participants on various emotional and semantic dimensions. The dataset was split into training, validation, and test sets in an 8:1:1 ratio.
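
An 8:1:1 split of 4,900 items can be sketched as follows (the word list below is a placeholder, not the Imbir (2016) data, and the shuffling seed is arbitrary):

```python
import random

words = [f"word_{i}" for i in range(4900)]  # placeholder items standing in for the dataset

rng = random.Random(0)  # fixed seed so the split is reproducible
rng.shuffle(words)

n_train = int(len(words) * 0.8)  # 3920 training items
n_val = int(len(words) * 0.1)    # 490 validation items
train = words[:n_train]
val = words[n_train:n_train + n_val]
test = words[n_train + n_val:]   # remaining 490 test items
```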
14
+
15
+ ## Performance
16
+
17
+ The model achieved the following Pearson correlations with human judgments on the test set:
18
+
19
+ - Valence: 0.93
20
+ - Arousal: 0.86
21
+ - Dominance: 0.92
22
+ - Concreteness: 0.95
23
+ - Age of Acquisition: 0.81
24
+ - Origin: 0.86
25
+ - Significance: 0.88
26
+ - Imageability: 0.88
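
Pearson's r is the covariance of predictions and human ratings divided by the product of their standard deviations. A self-contained sketch of that computation with toy numbers (not the actual evaluation data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: model predictions vs. human ratings for four words
preds = [0.1, 0.4, 0.35, 0.8]
human = [0.2, 0.5, 0.30, 0.9]
r = pearson(preds, human)
```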

## Usage

You can use the model and tokenizer as follows.

First, run the bash command below to clone the repository (this may take some time). Because the model uses a custom model class, it cannot be loaded through the standard Hugging Face auto classes.

```bash
git clone https://huggingface.co/hplisiecki/polemo_intensity
```

Then proceed as follows:

```python
from word2affect_polish.model_script import CustomModel  # custom model class shipped with the repository
from transformers import PreTrainedTokenizerFast

model_directory = "word2affect_polish"  # path to the cloned repository
model = CustomModel.from_pretrained(model_directory)
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_directory)

inputs = tokenizer("This is a test input.", return_tensors="pt")
outputs = model(inputs['input_ids'], inputs['attention_mask'])

# Print out the affective ratings
for emotion, rating in zip(['Valence', 'Arousal', 'Dominance', 'Origin', 'Significance', 'Concreteness', 'Imageability', 'Acquisition'], outputs):
    print(f"{emotion}: {rating.item()}")
```

## Citation

If you use this model, please cite the following paper:

```bibtex
@article{Plisiecki_Sobieszek_2023,
  title={Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection},
  author={Plisiecki, Hubert and Sobieszek, Adam},
  journal={Behavior Research Methods},
  year={2023},
  pages={1-16},
  doi={https://doi.org/10.3758/s13428-023-02212-3}
}
```