Towards A Reliable Ground-Truth For Biased Language Detection
- 1. University of Wuppertal
- 2. University of Konstanz
Description
Reference texts such as encyclopedias and news articles can manifest biased language when objective reporting is replaced by subjective writing. Existing methods for detecting linguistic cues of bias mostly rely on annotated data to train machine learning models. However, low inter-annotator agreement and limited comparability are substantial drawbacks of available media bias corpora. To improve on available datasets, we collect and compare labels obtained from two popular crowdsourcing platforms. Our results demonstrate the limited data quality of existing crowdsourcing approaches, underlining the need for a framework with trained experts to gather a more reliable dataset. Since agreement improves from Krippendorff's \(\alpha\) = 0.144 (crowdsourcing labels) to \(\alpha\) = 0.419 (expert labels), we assume that trained annotators' linguistic knowledge increases data quality and, in turn, the performance of existing bias detection systems.
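Agreement scores like the \(\alpha\) values above can be recomputed from raw annotation matrices. The sketch below uses the open-source `krippendorff` Python package on an invented annotator-by-sentence matrix; it illustrates the metric only and is not the pipeline used to produce this dataset.

```python
# Sketch: Krippendorff's alpha for nominal bias labels.
# Requires the third-party package: pip install krippendorff numpy
import numpy as np
import krippendorff

# Rows = annotators, columns = sentences (hypothetical example data).
# 1 = "biased", 0 = "non-biased", np.nan = sentence not rated by that annotator.
reliability_data = np.array([
    [1, 0, 1, np.nan, 0],
    [1, 0, 0, 1,      0],
    [1, 1, 1, 1,      np.nan],
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.3f}")
```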
The expert annotations are intended to enrich MBIC – A Media Bias Annotation Dataset Including Annotator Characteristics, available at https://zenodo.org/record/4474336#.YBHO6xYxmK8.