Published July 7, 2021 | Version v2
Dataset Open

Data for manuscript: "Prevalence of prejudice denoting words in news media discourse: a chronological analysis"

  • 1. Otago Polytechnic
  • 2. Columbia University
  • 3. University of Otago

Description

This data set contains frequency counts of target words in 27 million news and opinion articles from 47 popular news media outlets in the United States. The target words, listed in the associated manuscript, are mostly words that denote some type of prejudice. A few additional words that do not denote prejudice are also included because the manuscript uses them for illustration purposes.

The textual content of news and opinion articles from the outlets listed in Figure 4 of the main manuscript is available in the outlets' online domains and/or public cache repositories such as Google cache (https://webcache.googleusercontent.com), the Internet Wayback Machine (https://archive.org/web/web.php), and Common Crawl (https://commoncrawl.org). We derived word frequency counts from these sources. The textual content included in our analysis is limited to article headlines and the main body text of the articles; it does not include other article elements such as figure captions.
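The extraction step can be sketched as follows. This is a minimal illustration, not the released pipeline: the element paths are hypothetical (real outlets each need their own XPath expressions and a fault-tolerant HTML parser), and `xml.etree.ElementTree` stands in for whatever parser the actual scripts use.

```python
# Sketch of locating headline and body text in raw HTML, assuming
# well-formed markup and hypothetical element paths. Real outlets need
# outlet-specific XPath expressions and a tolerant HTML parser.
import xml.etree.ElementTree as ET

def extract_article(raw_html, headline_path, body_path):
    """Return headline plus body text, ignoring other page elements."""
    root = ET.fromstring(raw_html)
    headline = "".join(root.find(headline_path).itertext())
    body = " ".join("".join(p.itertext()) for p in root.findall(body_path))
    return headline + " " + body

# Toy page; captions, footers, etc. outside the selected paths are dropped.
page = (
    "<html><body><h1>Sample headline</h1>"
    "<div class='body'><p>First paragraph.</p><p>Second.</p></div>"
    "</body></html>"
)
text = extract_article(page, ".//h1", ".//div[@class='body']/p")
```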

Targeted textual content was located in the raw HTML using outlet-specific XPath expressions. Tokens were lowercased prior to estimating frequency counts. To prevent outlets with sparse text content in a given year from distorting aggregate frequency counts, we only include an outlet's frequency counts for years in which that outlet published at least 1.25 million words of article content. This threshold was chosen to maximize the inclusion of outlets with sparse amounts of article text per year, such as Reason, Alternet, or The American Spectator.
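The lowercasing and the 1.25-million-word inclusion threshold can be sketched as below. The data structures and function names are hypothetical (the released scripts are in analysisScripts.rar), and tokenization is simplified to whitespace splitting.

```python
# Sketch of the lowercasing and per-outlet-year inclusion filter.
# Hypothetical structures; the actual scripts are in analysisScripts.rar.
from collections import defaultdict

MIN_WORDS_PER_YEAR = 1_250_000  # threshold stated in the text

def tokenize(text):
    """Lowercase and split on whitespace (a simplification)."""
    return text.lower().split()

def included_outlet_years(articles, min_words=MIN_WORDS_PER_YEAR):
    """articles: iterable of (outlet, year, text) tuples.
    Returns total word counts for outlet-years meeting the threshold."""
    totals = defaultdict(int)  # (outlet, year) -> total word count
    for outlet, year, text in articles:
        totals[(outlet, year)] += len(tokenize(text))
    return {k: v for k, v in totals.items() if v >= min_words}
```

`min_words` is a parameter only so the filter can be exercised on small samples; the analysis uses the 1.25 million default.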

The yearly frequency of a target word in an outlet was estimated by dividing the total number of occurrences of the target word in all articles of a given year by the total number of words in all articles of that year. Estimating frequency this way accounts for variation in total article output over time.
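As a function, the estimate above is simply a ratio; the example numbers in the comment are made up for illustration.

```python
def yearly_frequency(occurrences, total_words):
    """Occurrences of a target word across all of an outlet's articles
    in a year, divided by all words in those articles."""
    if total_words == 0:
        raise ValueError("no article text for this outlet-year")
    return occurrences / total_words

# E.g. 150 occurrences in 3,000,000 words of article text:
freq = yearly_frequency(150, 3_000_000)  # 5e-05
```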

The compressed files in this data set are:

- analysisScripts.rar contains the analysis scripts used in the main manuscript.

- articlesContainingTargetWords.rar contains counts of target words in outlet articles, as well as total word counts for those articles.

- cableNews.rar contains the prevalence of target words in cable TV news. Data are from the Stanford Cable TV News Analyzer (https://tvnews.stanford.edu/).

- surveyData.rar contains longitudinal survey data used in the manuscript and links to the original sources.

Usage Notes

In a small percentage of articles, outlet-specific XPath expressions failed to properly capture the content of the article due to the heterogeneity of HTML elements and CSS styling combinations with which article text is arranged in outlets' online domains. As a result, the total and target word counts for a small subset of articles are not precise. In a random sample of articles and outlets, manually derived target word counts matched the automatically derived counts for over 90% of the articles.

Most of the incorrect frequency counts were minor deviations from the actual counts: for instance, counting the word "Facebook" in an article footnote that encourages readers to follow the journalist's Facebook profile and that the XPath expression mistakenly included as part of the article's main text.

Some additional outlet-specific inaccuracies that we could identify occurred in The Hill and Newsmax, where XPath expressions fell short of precisely capturing article content. For The Hill, in the years 2007-2009, XPath expressions failed to capture the complete text in about 40% of the articles. This does not necessarily result in incorrect frequency counts for that outlet, but rather in a sample of article words that is about 40% smaller than the total population of article words for those three years. In the case of Newsmax, the issue was that for some articles the XPath expressions captured the entire text of the article twice. Note that this does not result in incorrect frequency counts: if a word appears x times in an article with a total of y words, the same frequency is derived when our scripts count the word 2x times in a version of the article with 2y words.

To conclude, in a data analysis of 27 million articles we cannot manually check the correctness of frequency counts for every single article, and one hundred percent accuracy at capturing article content is elusive due to a small number of hard-to-detect boundary cases, such as incorrect HTML markup syntax in online domains. Overall, however, we are confident that our frequency metrics are representative of word prevalence in print news media content (see Figures 1 and 2 of the main manuscript for supporting evidence).
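The invariance argument for the Newsmax duplication can be checked directly: counting every token twice leaves the relative frequency unchanged. A minimal sketch, with tokenization simplified to whitespace splitting:

```python
def frequency(target, tokens):
    """Relative frequency of target among tokens."""
    return tokens.count(target) / len(tokens)

article = "the economy and the senate".split()  # y = 5 words, x = 2 "the"
doubled = article * 2                           # body captured twice: 2y words, 2x hits

assert frequency("the", article) == frequency("the", doubled)  # both 0.4
```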

Files

Files (632.5 MB)

Name  Size
md5:feec7150325b8434c6e6ff30b475a417  4.9 MB
md5:ccd3e293921516891f5475cd65c11010  624.2 MB
md5:a0a9f4adb0a7f4f9aa9713f7b7d1f547  4.1 kB
md5:537205b0299690f6cbabc3ced0ea5f64  3.4 MB