What do Twitter Comments Tell About News Article Bias? Assessing the Impact of News Article Bias on its Perception on Twitter
Authors/Creators
- 1. University of Konstanz
- 2. University of Zurich
Description
This is the repository for the paper "What do Twitter Comments Tell About News Article Bias? Assessing the Impact of News Article Bias on its Perception on Twitter".
News stories circulating online, especially on social media platforms, are nowadays a primary source of information. Given the nature of social media, news items are no longer standalone: they are embedded in the conversations of the users who interact with them. This is particularly relevant for inaccurate information or even outright misinformation, because user interaction has a crucial impact on whether information is disseminated uncritically. Biased coverage has been shown to affect personal decision-making, but it remains an open question whether users are aware of the biased reporting they encounter and how they react to it. The latter is particularly relevant because user reactions help contextualize reporting for other users and can thus mitigate, but may also exacerbate, the impact of biased media coverage.
This paper approaches the question from a measurement point of view, examining whether Twitter comments on articles can serve as bias indicators, i.e., whether user comments are indicative of the actual level of bias in a given article. We first give an overview of research on media bias, and then discuss key concepts related to how individuals engage with online content, focusing on the sentiment (or valence) of comments and on outright hate speech. We then present the first dataset connecting reliable human-made media bias classifications of news articles with the reactions these articles received upon publication on Twitter. We call our dataset BAT - Bias And Twitter. BAT covers 2,800 (bias-rated) news articles from 255 different English-speaking news outlets. Additionally, BAT includes 175,807 comments and retweets referring to the articles.
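To illustrate the structure described above, here is a minimal sketch of how article-level bias ratings could be joined to the Twitter reactions. The file layout and all column names (`article_id`, `outlet`, `bias_rating`, `kind`, `text`) are illustrative assumptions, not BAT's actual schema.

```python
import pandas as pd

# Hypothetical article table: one row per bias-rated news article.
# Column names are assumptions for illustration only.
articles = pd.DataFrame({
    "article_id": [1, 2],
    "outlet": ["Outlet A", "Outlet B"],
    "bias_rating": [0.8, 0.2],  # human-annotated bias score (assumed scale)
})

# Hypothetical reaction table: one row per comment or retweet.
reactions = pd.DataFrame({
    "article_id": [1, 1, 2],
    "kind": ["comment", "retweet", "comment"],
    "text": ["...", "...", "..."],
})

# Attach the article-level bias label to every reaction,
# which is the shape the comment-level analysis needs.
merged = reactions.merge(articles, on="article_id", how="left")
print(merged[["article_id", "kind", "bias_rating"]])
```

Joining on the article identifier keeps one row per reaction while carrying the article's bias rating alongside it.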
Based on BAT, we conduct a multi-feature analysis to identify comment characteristics and analyze whether the Twitter reactions correlate with an article's bias. First, we fine-tune and apply two XLNet-based classifiers, one for hate speech detection and one for sentiment analysis. Second, we relate the results of the classifiers to the article bias annotations within a multi-level regression. The results show that the comments made on an article are indeed an indicator of its bias, and vice versa. With a regression coefficient of 0.703 (p < 0.01), we present evidence that Twitter reactions to biased articles are significantly more hateful. Moreover, our analysis shows that the news outlet's individual stance reinforces the hate-bias relationship. In future work, we will extend the dataset and analysis, including additional concepts related to media bias.
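A multi-level regression of this kind, with a random intercept per news outlet, can be sketched with `statsmodels`. The data below are synthetic (generated with a built-in positive hate-bias relationship), and the variable names `hate`, `bias`, and `outlet` are placeholders, not the paper's actual features; this is a sketch of the model family, not a reproduction of the reported 0.703 coefficient.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: 20 outlets, 50 reactions each, with an
# outlet-specific offset plus a positive bias -> hate effect.
rng = np.random.default_rng(0)
n_outlets, per_outlet = 20, 50
outlet = np.repeat(np.arange(n_outlets), per_outlet)
outlet_effect = rng.normal(0.0, 0.3, n_outlets)[outlet]
bias = rng.uniform(0.0, 1.0, n_outlets * per_outlet)
hate = 0.7 * bias + outlet_effect + rng.normal(0.0, 0.5, n_outlets * per_outlet)
df = pd.DataFrame({"outlet": outlet, "bias": bias, "hate": hate})

# Mixed-effects model: fixed slope for bias,
# random intercept grouped by outlet.
result = smf.mixedlm("hate ~ bias", df, groups=df["outlet"]).fit()
print(result.params["bias"])  # estimated fixed-effect slope
```

The random intercept absorbs outlet-level differences, so the fixed-effect slope estimates the hate-bias relationship within outlets rather than conflating it with between-outlet differences.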