Update README.md

---
library_name: transformers
language:
- de
license: apache-2.0
tags:
- text-classification
- political-communication
- social-science
- synthetic-data
base_model: deepset/gbert-large
pipeline_tag: text-classification
widget:
- text: "Heute Abend live dabei sein! Unser Wahlkampf-Event beginnt um 19 Uhr. Sei dabei!"
- text: "Mehr Informationen auf unserer Website."
- text: "Erfahre mehr über unser Wahlprogramm!"
model-index:
- name: gbert-CTA-w-synth
  results:
  - task:
      type: text-classification
      name: Call to Action Detection
    dataset:
      name: German Instagram Political Content 2021
      type: custom
    metrics:
    - name: Macro F1 Score
      type: f1
      value: 0.93
    - name: Binary F1 Score
      type: f1
      value: 0.89
    - name: Precision
      type: precision
      value: 0.98
    - name: Recall
      type: recall
      value: 0.81
---

# gbert-CTA-w-synth

gbert-CTA-w-synth is a fine-tuned version of the [German BERT model (GBERT)](https://huggingface.co/deepset/gbert-large) designed to detect Calls to Action (CTAs) in political Instagram content. It was developed to analyze political mobilization strategies during the 2021 German Federal Election, focusing on Instagram stories and posts.

The model was trained on both real-world and synthetic data to mitigate class imbalance and improve performance. It detects explicit and implicit CTAs in multimodal content, covering captions, Optical Character Recognition (OCR) text from images, and video transcriptions.

## Model Description

- **Base Model**: `deepset/gbert-large`
- **Fine-tuned on**: German Instagram content, including captions, OCR text, and video transcriptions
- **Synthetic Data**: [Augmented with synthetic training data generated using OpenAI's GPT-4 to address class imbalance](https://huggingface.co/datasets/chaichy/CTA-synthetic-dataset/)
- **Task**: Binary classification of CTA presence or absence in Instagram posts and stories

For video transcriptions, we used [bofenghuang/whisper-large-v2-cv11-german](https://huggingface.co/bofenghuang/whisper-large-v2-cv11-german), a version of OpenAI's Whisper model fine-tuned for German.
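
The snippet below is a minimal sketch of how such transcriptions can be produced with the `transformers` ASR pipeline; the chunk length and file name are illustrative assumptions, not the exact preprocessing settings used for this model.

```python
from transformers import pipeline

# German-adapted Whisper model used for the video transcriptions
asr = pipeline(
    "automatic-speech-recognition",
    model="bofenghuang/whisper-large-v2-cv11-german",
    chunk_length_s=30,  # illustrative chunking for longer videos
)

# Transcribe the audio track of a story video (placeholder path)
result = asr("story_video.wav")
print(result["text"])
```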

## Performance

The model was evaluated against human-annotated ground-truth labels using five-fold cross-validation to assess its generalizability. It achieved the following scores:

- **Macro F1 score**: 0.93
- **Binary F1 score**: 0.89
- **Precision**: 0.98
- **Recall**: 0.81

The evaluation was based on a dataset of 1,388 documents annotated by nine contributors; disagreements were resolved by majority vote.
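
As a minimal sketch of how such scores are computed, for example with `scikit-learn` (whether this exact tooling was used is an assumption, and the arrays below are illustrative, not the actual evaluation data):

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Illustrative gold labels and predictions (1 = CTA present, 0 = absent)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]

print("Macro F1: ", f1_score(y_true, y_pred, average="macro"))
print("Binary F1:", f1_score(y_true, y_pred, average="binary"))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
```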

## Usage

This model is intended for computational social science and political communication research, specifically for studying how political actors mobilize audiences on social media. It is designed for detecting Calls to Action in German-language social media content.

### How to Use

You can use this model with the `transformers` library in Python:

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = BertTokenizer.from_pretrained("chaichy/gbert-CTA-w-synth")
model = BertForSequenceClassification.from_pretrained("chaichy/gbert-CTA-w-synth")
model.eval()

# Tokenize input (truncate to the model's maximum sequence length)
inputs = tokenizer("Input text here", return_tensors="pt", truncation=True)

# Get classification results without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)
predicted_class = torch.argmax(outputs.logits, dim=1)

# 0 for absence, 1 for presence of a CTA
print(f"Predicted class: {predicted_class.item()}")
```
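
For quick experiments, the `pipeline` API is a more compact alternative; note that unless the model config maps class ids to names, labels are reported as `LABEL_0` (absence) and `LABEL_1` (presence):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="chaichy/gbert-CTA-w-synth")

# Batch classification of several posts
texts = [
    "Erfahre mehr über unser Wahlprogramm!",
    "Mehr Informationen auf unserer Website.",
]
for result in classifier(texts):
    print(result["label"], round(result["score"], 3))
```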

### Data

The model was trained on Instagram content collected during the 2021 German Federal Election campaign. This included:

- **Captions**: Text accompanying images or videos in posts.
- **OCR text**: Text extracted from images via Optical Character Recognition.
- **Transcriptions**: Text extracted from video audio using [bofenghuang/whisper-large-v2-cv11-german](https://huggingface.co/bofenghuang/whisper-large-v2-cv11-german).
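
A minimal sketch of how these three sources might be combined into a single classifier input per post; the helper and field names are hypothetical, not the dataset's actual schema:

```python
def build_document(caption: str, ocr_text: str, transcription: str) -> str:
    """Concatenate the available text sources of one post (hypothetical helper)."""
    parts = [caption, ocr_text, transcription]
    return " ".join(p.strip() for p in parts if p)

doc = build_document(
    caption="Heute Abend live dabei sein!",
    ocr_text="Wahlkampf-Event 19 Uhr",
    transcription="Sei dabei und bring deine Freunde mit.",
)
```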

The dataset contains both explicit and implicit CTAs with binary labels (True/False). To handle class imbalance, we generated [synthetic training data based on the original human-annotated dataset](https://huggingface.co/datasets/chaichy/CTA-synthetic-dataset) using OpenAI's GPT-4o, which mimicked real-world CTAs by generating new examples in a consistent political communication style.
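
As a rough sketch of the general pattern for such synthetic generation with the OpenAI Python client (the prompt wording here is an illustrative assumption, not the authors' actual prompt):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Illustrative prompt asking for a new German campaign post with an explicit CTA
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Du schreibst kurze deutsche Wahlkampf-Posts."},
        {"role": "user", "content": "Schreibe einen Instagram-Post mit einem expliziten Call to Action."},
    ],
)
print(response.choices[0].message.content)
```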

## Ethical Considerations

The training data was collected from publicly available Instagram posts and stories shared by verified political accounts during the 2021 German Federal Election. No personal or sensitive data was included.

## Citation

If you use this model, please cite the following:

```bibtex
@misc{achmanndenkler2024detectingcallsactionmultimodal,
  title={Detecting Calls to Action in Multimodal Content: Analysis of the 2021 German Federal Election Campaign on Instagram},
  author={Michael Achmann-Denkler and Jakob Fehle and Mario Haim and Christian Wolff},
  year={2024},
  eprint={2409.02690},
  archivePrefix={arXiv},
  primaryClass={cs.SI},
  url={https://arxiv.org/abs/2409.02690},
}
```
|