Published March 20, 2021 | Version v1
Dataset Open

Towards A Reliable Ground-Truth For Biased Language Detection

  • 1. University of Wuppertal
  • 2. University of Konstanz

Description

Reference texts such as encyclopedias and news articles can manifest biased language when objective reporting is replaced by subjective writing. Existing methods for detecting linguistic cues of bias mostly rely on annotated data to train machine learning models. However, low annotator agreement and poor comparability are substantial drawbacks of the available media bias corpora. To improve on these datasets, we collect and compare labels obtained from two popular crowdsourcing platforms. Our results demonstrate the limited data quality of existing crowdsourcing approaches, underlining the need for a trained-expert framework to gather a more reliable dataset. Since agreement improves from Krippendorff's \(\alpha\) = 0.144 (crowdsourcing labels) to \(\alpha\) = 0.419 (expert labels), we assume that trained annotators' linguistic knowledge increases data quality, which in turn can improve the performance of existing bias detection systems.
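As a minimal sketch of how inter-annotator agreement figures such as the ones reported above could be computed, the snippet below uses the third-party `krippendorff` Python package on a small, entirely hypothetical label matrix; the example labels and annotator layout are assumptions for illustration, not data from this record.

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Hypothetical reliability data: rows = annotators, columns = annotated
# sentences; 1 = "biased", 0 = "non-biased", np.nan = sentence not rated
# by that annotator.
reliability_data = np.array([
    [1,      0, 1, np.nan, 0, 1],
    [1,      0, 0, 1,      0, 1],
    [np.nan, 0, 1, 1,      1, 1],
])

alpha = krippendorff.alpha(
    reliability_data=reliability_data,
    level_of_measurement="nominal",  # bias labels are categorical
)
print(f"Krippendorff's alpha: {alpha:.3f}")
```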

The expert annotations are intended to enrich the MBIC dataset (A Media Bias Annotation Dataset Including Annotator Characteristics), available at https://zenodo.org/record/4474336#.YBHO6xYxmK8.
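One possible way to combine the two resources is sketched below. The column names ("sentence", "expert_label") and file names are assumptions made for illustration and do not reflect the actual schema of either dataset; the join key would need to be adapted to the real files.

```python
import pandas as pd

# Hypothetical file names; replace with the actual MBIC export and the
# expert annotation file from this record.
mbic = pd.read_csv("mbic.csv")                   # crowdsourced MBIC annotations
experts = pd.read_csv("expert_annotations.csv")  # expert labels from this record

# Join on the annotated sentence text, assuming it is shared between the files.
enriched = mbic.merge(
    experts[["sentence", "expert_label"]],
    on="sentence",
    how="left",
)
enriched.to_csv("mbic_enriched.csv", index=False)
```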

Files (2.5 MB)

annotation_guidelines.pdf

Name (MD5 checksum)                       Size
md5:a5ad7b3c526645c37d123ff312880bfe      347.3 kB
md5:37779ef5be35cf2e558657f9d8ed003b      380.2 kB
md5:dea915f49a4e015309f7ab45a839ec75      377.6 kB
md5:29d07169732f50c310f7e98dc796784a      1.4 MB