arxiv:2205.10773

A Domain-adaptive Pre-training Approach for Language Bias Detection in News

Published on May 22, 2022
Authors:
Jan-David Krieger, Timo Spinde, Terry Ruas, Juhi Kulshrestha, Bela Gipp

AI-generated summary

A new transformer-based model, DA-RoBERTa, along with DA-BERT and DA-BART, detects sentence-level media bias with high accuracy, outperforming previous approaches.

Abstract

Media bias is a multi-faceted construct influencing individual behavior and collective decision-making. Slanted news reporting is the result of one-sided and polarized writing, which can occur in various forms. In this work, we focus on an important form of media bias, i.e., bias by word choice. Detecting biased word choices is a challenging task due to its linguistic complexity and the lack of representative gold-standard corpora. We present DA-RoBERTa, a new state-of-the-art transformer-based model adapted to the media bias domain, which identifies sentence-level bias with an F1 score of 0.814. In addition, we train DA-BERT and DA-BART, two further transformer models adapted to the bias domain. Our proposed domain-adapted models outperform prior bias detection approaches on the same data.
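
The abstract compresses a two-stage recipe: continue pre-training a transformer on in-domain news text, then fine-tune the adapted encoder for sentence-level bias classification. Below is a minimal sketch of that recipe using the Hugging Face `transformers` and `datasets` libraries; the base checkpoint, the corpus file `news_corpus.txt`, the hyperparameters, and the label convention are illustrative assumptions, not the paper's released configuration.

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Stage 1: continue masked-language-model pre-training on in-domain
# news text so the encoder adapts to the media-bias domain.
news = load_dataset("text", data_files={"train": "news_corpus.txt"})  # hypothetical corpus
news = news.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

mlm_model = AutoModelForMaskedLM.from_pretrained("roberta-base")
Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="da-roberta-mlm", num_train_epochs=1),
    train_dataset=news["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
).train()
mlm_model.save_pretrained("da-roberta-mlm")
tokenizer.save_pretrained("da-roberta-mlm")

# Stage 2: reload the adapted encoder with a fresh classification head
# and fine-tune it on labeled sentences (assumed: 0 = unbiased, 1 = biased).
clf = AutoModelForSequenceClassification.from_pretrained(
    "da-roberta-mlm", num_labels=2
)
# ... fine-tune `clf` with another Trainer on a sentence-level bias
# corpus, then report F1 on held-out data as the paper does.
```

Starting stage 2 from the domain-adapted weights rather than from generic `roberta-base` is the step the "DA-" prefix appears to denote; the same recipe would apply to the DA-BERT and DA-BART variants by swapping the base architecture.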

Models citing this paper (2)

Datasets citing this paper (0)

Spaces citing this paper (0)

Collections including this paper (1)