---
license: cc
language:
- en
library_name: transformers
tags:
- social media
- contrastive learning
---
# Contrastive Learning of Sociopragmatic Meaning in Social Media
<p align="center"> <a href="https://chiyuzhang94.github.io/" target="_blank">Chiyu Zhang</a>, <a href="https://mageed.arts.ubc.ca/" target="_blank">Muhammad Abdul-Mageed</a>, <a href="https://ganeshjawahar.github.io/" target="_blank">Ganesh Jawahar</a></p>
<p align="center">Published in Findings of ACL 2023</p>
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 90%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses: a corpus-aware contrastive loss (CCL) and a light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the clusters of individual classes distinguishable while preserving the semantic relationships between classes.
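
To give an intuition for label-aware contrastive supervision, below is a minimal, illustrative PyTorch sketch of a SupCon-style loss in which examples sharing the same surrogate label (e.g., the same emoji) are treated as positives. This is only a simplified stand-in for the idea, not the exact CCL or LCL-LiT objectives from the paper.

```python
import torch
import torch.nn.functional as F

def label_aware_contrastive_loss(embeddings, labels, temperature=0.07):
    """Illustrative SupCon-style loss (not the paper's exact CCL/LCL-LiT):
    pull together examples with the same surrogate label, push apart the rest."""
    z = F.normalize(embeddings, dim=1)              # (N, d) unit-norm embeddings
    sim = torch.matmul(z, z.T) / temperature        # (N, N) pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    # Positives: same surrogate label, excluding the anchor itself
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Average log-likelihood of positives per anchor; skip anchors with no positives
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()
```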
## Checkpoints of Models Pre-Trained with InfoDCL
English Models:
* InfoDCL-RoBERTa trained with TweetEmoji-EN: https://huggingface.co/UBC-NLP/InfoDCL-emoji
* InfoDCL-RoBERTa trained with TweetHashtag-EN: https://huggingface.co/UBC-NLP/InfoDCL-hashtag
Multilingual Model:
* InfoDCL-XLMR trained with multilingual TweetEmoji-multi: https://huggingface.co/UBC-NLP/InfoDCL-Emoji-XLMR-Base
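
These checkpoints can be loaded with the `transformers` library to obtain sentence representations; a minimal sketch (using the English emoji checkpoint, but any of the checkpoints above can be substituted):

```python
from transformers import AutoTokenizer, AutoModel

# Load an InfoDCL checkpoint (swap in any checkpoint listed above)
tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/InfoDCL-emoji")
model = AutoModel.from_pretrained("UBC-NLP/InfoDCL-emoji")

inputs = tokenizer("I love this so much!", return_tensors="pt")
outputs = model(**inputs)
# Use the first token's hidden state as a sentence-level representation
sentence_embedding = outputs.last_hidden_state[:, 0]
```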
## Model Performance
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Fine-tuning results on our 24 sociopragmatic meaning datasets (average macro-F1 over five runs).