10.5281/zenodo.3930903
https://zenodo.org/records/3930903
oai:zenodo.org:3930903
https://github.com/thepanacealab/covid19_twitter
Banda, Juan M.
Juan M.
Banda
0000-0001-8499-824X
Georgia State University
Tekumalla, Ramya
Ramya
Tekumalla
0000-0002-1606-4856
Georgia State University
Wang, Guanyu
Guanyu
Wang
University of Missouri
Yu, Jingyuan
Jingyuan
Yu
Universitat Autònoma de Barcelona
Liu, Tuo
Tuo
Liu
Carl von Ossietzky Universität Oldenburg
Ding, Yuning
Yuning
Ding
Universität Duisburg-Essen
Artemova, Katya
Katya
Artemova
NRU HSE
Tutubalina, Elena
Elena
Tutubalina
KFU
Chowell, Gerardo
Gerardo
Chowell
0000-0003-2194-2251
Georgia State University
A large-scale COVID-19 Twitter chatter dataset for open scientific research - an international collaboration
Zenodo
2020
social media
twitter
nlp
covid-19
covid19
2020-07-05
eng
http://www.panacealab.org/covid19/
https://arxiv.org/abs/2004.03688
10.5281/zenodo.3723939
https://zenodo.org/communities/covid-19
https://zenodo.org/communities/biohackathon
17.0
Other (Public Domain)
NEW in Version 17: Besides our regular update, we now have included the tweet identifiers and their respective tweet location place country code for the clean version of the dataset. This is found in the clean_place_country.tar.gz file; each file is identified by the two-character ISO country code as its suffix.
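As a minimal sketch of pulling the per-country files out of such an archive: the two-character ISO country code suffix follows the description above, but the base member names used here are an assumption, and a tiny in-memory archive stands in for clean_place_country.tar.gz.

```python
import io
import tarfile

# Build a tiny in-memory stand-in for clean_place_country.tar.gz.
# Member base names are hypothetical; only the two-character ISO
# country code suffix follows the dataset description.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name in ("tweets_US.tsv", "tweets_ES.tsv", "tweets_FR.tsv"):
        data = b"tweet_id\tdate\n1280000000000000000\t2020-07-05\n"
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

def members_for_country(tar_bytes, country_code):
    """Return archive member names matching an ISO country code suffix."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r:gz") as tar:
        return [m.name for m in tar.getmembers()
                if m.name.endswith(f"_{country_code}.tsv")]

print(members_for_country(buf.getvalue(), "US"))  # -> ['tweets_US.tsv']
```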
Due to the relevance of the COVID-19 global pandemic, we are releasing our dataset of tweets acquired from the Twitter Stream related to COVID-19 chatter. Since our first release we have received additional data from our new collaborators, allowing this resource to grow to its current size. Dedicated data gathering started on March 11th, yielding over 4 million tweets a day. We have added additional data provided by our new collaborators from January 27th to March 27th to provide extra longitudinal coverage. Version 10 added ~1.5 million tweets in the Russian language collected between January 1st and May 8th, graciously provided to us by Katya Artemova (NRU HSE) and Elena Tutubalina (KFU). From version 12 we have included daily hashtags, mentions, and emojis and their frequencies in the respective zip files. From version 14 we have included the tweet identifiers and their respective language for the clean version of the dataset. This is found in the clean_languages.tar.gz file; each file is identified by the two-character language code as its suffix.
The data collected from the stream captures all languages, but the most prevalent are English, Spanish, and French. We release all tweets and retweets in the full_dataset.tsv file (468,169,539 unique tweets), and a cleaned version with no retweets in the full_dataset-clean.tsv file (115,262,201 unique tweets). There are several practical reasons to keep the retweets; tracing important tweets and their dissemination is one of them. For NLP tasks we provide the top 1000 frequent terms in frequent_terms.csv, the top 1000 bigrams in frequent_bigrams.csv, and the top 1000 trigrams in frequent_trigrams.csv. General per-day statistics for both datasets are included in the statistics-full_dataset.tsv and statistics-full_dataset-clean.tsv files. For more statistics and some visualizations, visit http://www.panacealab.org/covid19/
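A minimal sketch of loading one of the n-gram frequency files (frequent_terms.csv and similar): the two-column term/frequency layout and the sample values used here are assumptions about the exact file schema, not taken from the release itself.

```python
import csv
import io

# Inline stand-in for frequent_terms.csv; the term,frequency column
# layout is an assumption about the exact schema.
sample = """term,frequency
covid19,1200345
pandemic,980211
quarantine,455678
"""

def top_terms(csv_text, n=2):
    """Return the n most frequent terms from a term/frequency CSV."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: int(r["frequency"]), reverse=True)
    return [(r["term"], int(r["frequency"])) for r in rows[:n]]

print(top_terms(sample))  # -> [('covid19', 1200345), ('pandemic', 980211)]
```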
More details can be found (and will be updated faster) at https://github.com/thepanacealab/covid19_twitter, along with our pre-print about the dataset (https://arxiv.org/abs/2004.03688).
As always, the tweets distributed here are only tweet identifiers (with date and time added) due to Twitter's terms and conditions, which permit redistribution of Twitter data ONLY for research purposes. They need to be hydrated to be used.
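Hydration is typically done with tools such as twarc or Hydrator, which look each identifier up via the Twitter API; the v1.1 statuses/lookup endpoint accepts up to 100 IDs per request, so IDs are sent in batches of that size. A minimal sketch of the batching step (the API call itself is left as a placeholder, and the ID values are made up for illustration):

```python
# Sketch of preparing tweet IDs for hydration. The dataset files hold
# only tweet identifiers; a hydrator looks each one up via the Twitter
# API, batching up to 100 IDs per v1.1 statuses/lookup request.

def batched(ids, size=100):
    """Yield successive batches of at most `size` tweet IDs."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

# Hypothetical stand-in for the ID column of full_dataset.tsv.
tweet_ids = [str(1_280_000_000_000_000_000 + i) for i in range(250)]

batches = list(batched(tweet_ids))
print(len(batches), [len(b) for b in batches])  # -> 3 [100, 100, 50]

# for batch in batches:
#     hydrate(batch)  # placeholder for an API call via e.g. twarc
```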
This dataset will be updated at least bi-weekly with additional tweets; check the GitHub repo for these updates.
Release note: We have standardized the name of the resource to match our pre-print manuscript, so that it does not need to be updated every week.