Conference paper Open Access

MBIC – A Media Bias Annotation Dataset Including Annotator Characteristics

Spinde, Timo and Rudnitckaia, Lada and Sinha, Kanishka and Hamborg, Felix and Gipp, Bela and Donnay, Karsten

Many people consider news articles to be a reliable source of information on current events. However, due to the range of factors influencing news agencies, such coverage may not always be impartial. Media bias, or slanted news coverage, can have a substantial impact on public perception of events and, accordingly, can potentially alter the beliefs and views of the public. The main data gap in current research on media bias detection is a robust, representative, and diverse dataset containing annotations of biased words and sentences. In particular, existing datasets do not control for the individual background of annotators, which may affect their assessment and thus represents critical information for contextualizing their annotations. In this poster, we present a matrix-based methodology to crowdsource such data using a self-developed annotation platform. We also present MBIC (Media Bias Including Characteristics), the first sample of 1,700 statements representing various media bias instances. Each statement was reviewed by ten annotators and contains labels for media bias identification on both the word and sentence level. MBIC is the first available media bias dataset to report detailed information on annotator characteristics and their individual background. The current dataset already significantly extends existing data in this domain, providing unique and more reliable insights into the perception of bias. In the future, we will further extend it with respect to both the number of articles and the number of annotators per article.

