---
license: agpl-3.0
---

# SentMix-3L: A Bangla-English-Hindi Code-Mixed Dataset for Sentiment Analysis

**Publication**: *The First Workshop in South East Asian Language Processing, co-located with AACL 2023.*

---

## Introduction

Code-mixing is a well-studied linguistic phenomenon in which two or more languages are mixed in text or speech. Several datasets have been built to train computational models for code-mixing. Although code-mixing across more than two languages is common in practice, most available datasets contain code-mixing between only two languages. In this paper, we introduce **SentMix-3L**, a novel dataset for sentiment analysis containing code-mixed data in three languages: Bangla, English, and Hindi. We show that zero-shot prompting with GPT-3.5 outperforms all transformer-based models on SentMix-3L.
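
The zero-shot setup reduces to a prompt-construction step; the template and label set below are illustrative assumptions, not the exact prompt used in the paper:

```python
# Minimal sketch of a zero-shot sentiment prompt for one code-mixed instance.
# The template and the label set are illustrative assumptions, not the
# paper's verbatim prompt.

LABELS = ("positive", "neutral", "negative")

def build_prompt(text: str) -> str:
    """Return a zero-shot classification prompt for a single instance."""
    return (
        "Classify the sentiment of the following Bangla-English-Hindi "
        f"code-mixed text as one of: {', '.join(LABELS)}.\n\n"
        f"Text: {text}\nSentiment:"
    )

prompt = build_prompt("Ami aaj khub happy, yeh weekend bohot accha tha!")
print(prompt)
```

The returned string would then be sent to the chat API, and the model's free-text reply mapped back onto the label set.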

---

## Dataset Details

We introduce **SentMix-3L**, a novel three-language code-mixed test dataset with gold-standard labels in Bangla-Hindi-English for the task of sentiment analysis, containing 1,007 instances.
> We are presenting this dataset exclusively as a test set due to the unique and specialized nature of the task. Such data is very difficult to gather and requires significant expertise to access. The size of the dataset, while limiting for training purposes, offers a high-quality testing environment with gold-standard labels that can serve as a benchmark in this domain.

---

## Dataset Statistics
| | **All** | **Bangla** | **English** | **Hindi** | **Other** |
|-------------------|---------|------------|-------------|-----------|-----------|
| Tokens            | 89,494  | 32,133     | 5,998       | 15,131    | 36,232    |
| Types             | 19,686  | 8,167      | 1,073       | 1,474     | 9,092     |
| Max. in instance  | 173     | 62         | 20          | 47        | 93        |
| Min. in instance  | 41      | 4          | 3           | 2         | 8         |
| Avg               | 88.87   | 31.91      | 5.96        | 15.03     | 35.98     |
| Std Dev           | 19.19   | 8.39       | 2.94        | 5.81      | 9.70      |
*'Avg' is the average number of tokens per instance; 'Std Dev' is the corresponding standard deviation.*
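
Statistics like these can be reproduced from token-level language tags. A minimal sketch, assuming each instance is a list of `(token, language_tag)` pairs; the tagging scheme (`"bn"`/`"en"`/`"hi"`/`"other"`) and the toy instances are assumptions for illustration:

```python
from statistics import mean, stdev

# Toy instances: each is a list of (token, language_tag) pairs.
# The tag names and example tokens are illustrative assumptions.
instances = [
    [("ami", "bn"), ("khub", "bn"), ("happy", "en"), ("tha", "hi")],
    [("yeh", "hi"), ("movie", "en"), ("bhalo", "bn"), ("!", "other")],
]

def stats_for(lang=None):
    """Token/type counts and per-instance length stats, optionally per language."""
    per_instance = [
        [tok for tok, tag in inst if lang is None or tag == lang]
        for inst in instances
    ]
    all_tokens = [t for toks in per_instance for t in toks]
    lengths = [len(toks) for toks in per_instance]
    return {
        "tokens": len(all_tokens),          # total token count
        "types": len(set(all_tokens)),      # distinct tokens
        "max": max(lengths),                # max tokens in one instance
        "min": min(lengths),                # min tokens in one instance
        "avg": mean(lengths),
        "std": stdev(lengths) if len(lengths) > 1 else 0.0,
    }

print(stats_for())        # across all languages
print(stats_for("bn"))    # Bangla tokens only
```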

---

## Results
| **Models** | **Weighted F1 Score** |
|---------------|-----------------------|
| GPT-3.5 Turbo | **0.62**              |
| XLM-R         | 0.59                  |
| BanglishBERT  | 0.56                  |
| mBERT         | 0.56                  |
| BERT          | 0.55                  |
| RoBERTa       | 0.54                  |
| MuRIL         | 0.54                  |
| IndicBERT     | 0.53                  |
| DistilBERT    | 0.53                  |
| HindiBERT     | 0.48                  |
| HingBERT      | 0.47                  |
| BanglaBERT    | 0.47                  |
*Weighted F1 scores for each model when training on synthetic data and testing on natural data.*
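
The weighted F1 metric averages per-class F1 scores weighted by each class's support. A minimal pure-Python sketch on toy labels (the label names are illustrative):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted average of per-class F1 scores."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        # true positives, and how often the class was predicted at all
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        pred_pos = sum(1 for p in y_pred if p == cls)
        prec = tp / pred_pos if pred_pos else 0.0
        rec = tp / n
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        score += (n / total) * f1  # weight by class support
    return score

y_true = ["pos", "pos", "neg", "neu", "neg"]
y_pred = ["pos", "neg", "neg", "neu", "neg"]
print(round(weighted_f1(y_true, y_pred), 3))
```

In practice this matches `sklearn.metrics.f1_score(..., average="weighted")`.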

---

## Citation

If you use this dataset, please cite our paper.
```bibtex
@article{raihan2023sentmix,
  title={SentMix-3L: A Bangla-English-Hindi Code-Mixed Dataset for Sentiment Analysis},
  author={Raihan, Md Nishat and Goswami, Dhiman and Mahmud, Antara and Anastasopoulos, Antonios and Zampieri, Marcos},
  journal={arXiv preprint arXiv:2310.18023},
  year={2023}
}
```