Published August 19, 2020 | Version 1.1.0
Software | Open Access

Factiva parser and NLP pipeline for news articles related to COVID-19

  • 1. University of Zurich - Institute of Biomedical Ethics and History of Medicine

Contributors

Project leader:

Project member:

  • 1. University of Zurich - Institute of Biomedical Ethics and History of Medicine
  • 2. Swiss Tropical and Public Health Institute

Description

The COVID-19 pandemic has generated (and keeps generating) a huge corpus of news articles that is easily retrievable from Factiva with targeted queries.

The aim of this software is to provide the means to analyze this material rapidly. 

Data are retrieved from Factiva and downloaded by hand (...) in RTF format. The RTF files are then converted to TXT with unoconv in a Unix environment.
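The conversion can be scripted; the following is a minimal sketch, assuming unoconv is installed and the RTF exports sit in a folder named "factiva_rtf" (a hypothetical path, not from the notebooks):

```python
# Batch-convert Factiva RTF exports to plain text via unoconv.
import subprocess
from pathlib import Path

for rtf in sorted(Path("factiva_rtf").glob("*.rtf")):
    # Writes a .txt file next to each .rtf file.
    subprocess.run(["unoconv", "-f", "txt", str(rtf)], check=True)
```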

 

Parser:

The parser takes as input files that are numerically ordered in a folder. This ordering is not essential (e.g., in the case of multiple retrievals from Factiva), because the parser orders the articles by date using the date field contained in each article. It is nevertheless important to reduce duplicates, because they increase the computational time needed to process the corpus: before adding new articles to the folder, make sure to retrieve them from a time point that does not overlap with the articles already retrieved.

In any case, in the last phase the dataframe is checked for duplicates, which are counted and removed; still, those duplicate articles have already been processed by the parser, which takes computational time.
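As an illustration, the final ordering and deduplication step could look roughly like this in pandas (the column names are assumptions, not taken from the actual notebooks):

```python
import pandas as pd

# Toy stand-in for the parsed corpus; real column names may differ.
df = pd.DataFrame({
    "date":  ["2020-03-02", "2020-03-01", "2020-03-01"],
    "title": ["B", "A", "A"],
    "text":  ["...", "...", "..."],
})

df["date"] = pd.to_datetime(df["date"])
df = df.sort_values("date")                        # order articles by their date field
n_before = len(df)
df = df.drop_duplicates(subset=["title", "text"])  # count and remove duplicates
print(f"removed {n_before - len(df)} duplicate articles")
```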

The parser removes search summaries, segments the text, and cleans it using regex rules. The resulting text is exported as a complete dataframe in a CSV file; a subset containing only the title and text is exported as TXT, ready to be fed to the NLP pipeline.
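A sketch of this two-file export, again with hypothetical column and file names:

```python
import pandas as pd

# Hypothetical parsed corpus; real columns and file names may differ.
df = pd.DataFrame({"date": ["2020-03-01"], "title": ["A"], "text": ["..."]})

df.to_csv("corpus_full.csv", index=False)          # complete dataframe as CSV
with open("corpus_text.txt", "w", encoding="utf-8") as f:
    for _, row in df.iterrows():                   # title + text only, for the NLP pipeline
        f.write(f'{row["title"]}\n{row["text"]}\n\n')
```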

The parser is language agnostic: just change the path to the folder containing the documents to parse. Important: there is a regex rule that mentions languages ("header_leftover"); it lists EN, DE, FR and IT. If you need to work with another language, remember to adjust that rule.
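The actual pattern is not reproduced here, but a hypothetical stand-in shows the kind of change required: extend the language alternation when adding a language.

```python
import re

# Hypothetical stand-in for the "header_leftover" rule; the real pattern
# differs, but it hard-codes the same four languages. Add e.g. "|ES" to
# the alternation to handle Spanish.
header_leftover = re.compile(r"\b(EN|DE|FR|IT)\b")
print(header_leftover.sub("", "header leftover EN 532 words"))
```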

 

NLP pipeline:

The NLP pipeline imports the files generated by the parser (split by month to reduce the memory load) and analyses them. It is not language agnostic: the correct language settings must be specified in the "setting up", "NLP" and "additional rules" sections.
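The month-by-month loading might look like the following sketch (the folder and file-naming scheme are assumptions):

```python
from pathlib import Path
import pandas as pd

# Process one month at a time to keep memory usage low.
# "parsed/2020-03.csv"-style names are an assumed convention.
for path in sorted(Path("parsed").glob("*.csv")):
    month_df = pd.read_csv(path)
    print(path.stem, len(month_df))  # one dataframe per month
```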

First, some additional rules for named-entity recognition (NER) are defined. Some are general, some are language-specific, as specified in the relevant section.
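With spaCy, such rules can be injected through an EntityRuler; this sketch uses made-up patterns, not the ones defined in the notebooks:

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Add custom NER rules before the statistical NER component.
ruler = nlp.add_pipe("entity_ruler", before="ner")
ruler.add_patterns([
    {"label": "ORG", "pattern": "World Health Organization"},  # example pattern
    {"label": "GPE", "pattern": "Ticino"},                     # language-specific example
])

doc = nlp("The World Health Organization reported new cases in Ticino.")
print([(ent.text, ent.label_) for ent in doc.ents])
```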

The files are opened and preprocessed; then lemma frequency and named-entity frequency are calculated for each month and for the whole corpus. Important: in case of empty months (i.e., when analyzing less than one year of data), remember to exclude them from the mean; otherwise the mean will be distorted by the empty months.
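For instance (with made-up numbers):

```python
import pandas as pd

# Toy monthly lemma counts; values are invented for illustration.
monthly = pd.Series(
    [120, 95, 0, 0],
    index=["2020-02", "2020-03", "2020-04", "2020-05"],
)
print(monthly.mean())               # distorted by the empty months
print(monthly[monthly > 0].mean())  # empty months excluded
```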

All the dataframes are exported as CSV files for further analysis or for data visualization.

This code is optimized for English, German, French and Italian. Nevertheless, since it is based on spaCy, which provides models for several other languages (https://spacy.io/models), it could easily be adapted.
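Adapting the pipeline to, say, Spanish would mostly mean loading the corresponding model (and revising the language-specific rules):

```python
import spacy

# Requires: python -m spacy download es_core_news_sm
nlp = spacy.load("es_core_news_sm")  # swap in the model for the new language
print(nlp("La pandemia generó un gran corpus de noticias.").ents)
```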

The whole software is structured as JupyterLab notebooks, heavily commented for future reference.

 

This work is part of the PubliCo research project.

Notes

This work is part of the PubliCo research project, supported by the Swiss National Science Foundation (SNF). Project no. 31CA30_195905

Files (423.9 kB)

de-NLP.ipynb

Additional details

Related works

Compiles
Dataset: 10.5281/zenodo.4036071 (DOI)