# Dataset Card for Media-Bias-Identification-Benchmark

## Table of Contents

- [Dataset Card for Media-Bias-Identification-Benchmark](#dataset-card-for-mbib)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Tasks and Information](#tasks-and-information)
- [Baseline](#baseline)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)

TODO

### Baseline

<table>
<tr><td><b>Task</b></td><td><b>Model</b></td><td><b>Micro F1</b></td><td><b>Macro F1</b></td></tr>
<tr><td>cognitive-bias</td> <td>ConvBERT/ConvBERT</td> <td>0.7126</td> <td>0.7664</td></tr>
<tr><td>fake-news</td> <td>Bart/RoBERTa-T</td> <td>0.6811</td> <td>0.7533</td></tr>
<tr><td>gender-bias</td> <td>RoBERTa-T/ELECTRA</td> <td>0.8334</td> <td>0.8211</td></tr>
<tr><td>hate-speech</td> <td>RoBERTa-T/Bart</td> <td>0.8897</td> <td>0.7310</td></tr>
<tr><td>linguistic-bias</td> <td>ConvBERT/Bart</td> <td>0.7044</td> <td>0.4995</td></tr>
<tr><td>political-bias</td> <td>ConvBERT/ConvBERT</td> <td>0.7041</td> <td>0.7110</td></tr>
<tr><td>racial-bias</td> <td>ConvBERT/ELECTRA</td> <td>0.8772</td> <td>0.6170</td></tr>
<tr><td>text-level-bias</td> <td>ConvBERT/ConvBERT</td> <td>0.7697</td> <td>0.7532</td></tr>
</table>
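The table reports both micro- and macro-averaged F1 per task. As a minimal sketch of how the two averages differ on binary labels (plain Python for illustration only; this is not the benchmark's own evaluation code):

```python
def f1(tp, fp, fn):
    # Harmonic mean of precision and recall; 0.0 when there are no true positives.
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def micro_macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    stats = {label: [0, 0, 0] for label in labels}  # per-label [tp, fp, fn]
    for t, p in zip(y_true, y_pred):
        if t == p:
            stats[t][0] += 1       # true positive for the shared label
        else:
            stats[p][1] += 1       # false positive for the predicted label
            stats[t][2] += 1       # false negative for the true label
    # Macro: average the per-label F1 scores, weighting every label equally.
    macro = sum(f1(*stats[label]) for label in labels) / len(labels)
    # Micro: pool counts across labels first, then compute a single F1.
    tp = sum(s[0] for s in stats.values())
    fp = sum(s[1] for s in stats.values())
    fn = sum(s[2] for s in stats.values())
    micro = f1(tp, fp, fn)
    return micro, macro

# Binary labels in the style of the MBIB tasks (1 = biased, 0 = non-biased).
micro, macro = micro_macro_f1([1, 1, 0, 0], [1, 0, 0, 0])
```

Micro-F1 pools true/false positives over all labels, so frequent classes dominate it, while macro-F1 averages per-class scores equally; this is why the two columns can diverge sharply on imbalanced tasks such as racial-bias (0.8772 vs. 0.6170).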