A large-scale COVID-19 Twitter chatter dataset for open scientific research - an international collaboration
Creators
1. Georgia State University
2. University of Missouri
3. Universitat Autònoma de Barcelona
4. Carl von Ossietzky Universität Oldenburg
5. Universität Duisburg-Essen
6. NRU HSE
7. KFU
Description
Celebrating version 20 of the dataset, we have refactored the full_dataset.tsv and full_dataset_clean.tsv files to include two additional columns: language and place country code (when available). Language and country code are now included for ALL tweets in the dataset, not only the clean tweets. With this change we have removed the clean_place_country.tar.gz and clean_languages.tar.gz files. While refactoring the dataset-generation code we also found a small bug that caused some retweets not to be counted properly, hence the extra increase in available tweets.
Due to the relevance of the COVID-19 global pandemic, we are releasing our dataset of tweets acquired from the Twitter Stream related to COVID-19 chatter. Since our first release we have received additional data from our new collaborators, allowing this resource to grow to its current size. Dedicated data gathering started on March 11th, yielding over 4 million tweets a day. We have added data provided by our new collaborators covering January 27th to March 27th, to provide extra longitudinal coverage. Version 10 added ~1.5 million tweets in the Russian language collected between January 1st and May 8th, graciously provided to us by Katya Artemova (NRU HSE) and Elena Tutubalina (KFU). Since version 12 we have included daily hashtags, mentions, and emojis, with their frequencies, in the respective zip files. Since version 14 we have included the tweet identifiers and their respective language for the clean version of the dataset. Since version 20 we have included language and place country code for all tweets.
The data collected from the stream captures all languages, but the most prevalent are English, Spanish, and French. We release all tweets and retweets in the full_dataset.tsv file (563,343,636 unique tweets), and a cleaned version with no retweets in the full_dataset_clean.tsv file (132,577,810 unique tweets). There are several practical reasons to keep the retweets; tracing important tweets and their dissemination is one of them. For NLP tasks we provide the 1,000 most frequent terms in frequent_terms.csv, the 1,000 most frequent bigrams in frequent_bigrams.csv, and the 1,000 most frequent trigrams in frequent_trigrams.csv. General per-day statistics for both datasets are included in the full_dataset-statistics.tsv and full_dataset_clean-statistics.tsv files. For more statistics and some visualizations, visit: http://www.panacealab.org/covid19/
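As a quick illustration, here is a minimal pandas sketch for loading the per-tweet TSV and filtering by language. The column names (tweet_id, date, time, lang, country_code) are assumptions based on the description above, so check the actual file header before relying on them:

```python
import pandas as pd

# Assumed column layout: tweet identifier, date, time, language, and
# place country code (the last two may be empty when unavailable).
columns = ["tweet_id", "date", "time", "lang", "country_code"]

# Read in chunks: the full file holds hundreds of millions of rows.
spanish_chunks = []
for chunk in pd.read_csv(
    "full_dataset_clean.tsv",
    sep="\t",
    names=columns,
    header=0,  # assumes the file ships a header row; drop if it does not
    dtype={"tweet_id": str, "country_code": str},  # keep IDs as strings
    chunksize=1_000_000,
):
    spanish_chunks.append(chunk[chunk["lang"] == "es"])

spanish = pd.concat(spanish_chunks, ignore_index=True)
print(f"{len(spanish)} Spanish-language tweet IDs")
```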
More details can be found (and will be updated more frequently) at https://github.com/thepanacealab/covid19_twitter, and in our pre-print about the dataset: https://arxiv.org/abs/2004.03688
As always, the tweets distributed here are only tweet identifiers (with date and time added), since Twitter's terms and conditions allow the redistribution of Twitter data as identifiers ONLY, and for research purposes only. They need to be hydrated before use.
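For example, hydration can be done with a tool such as the twarc library; a minimal sketch, assuming a hypothetical tweet_ids.txt file with one identifier per line and placeholder API credentials:

```python
import json
from twarc import Twarc

# Placeholder credentials: obtain real ones from a Twitter developer account.
t = Twarc(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

# tweet_ids.txt: one tweet identifier per line, e.g. extracted from the
# first column of full_dataset_clean.tsv.
with open("tweet_ids.txt") as ids, open("hydrated.jsonl", "w") as out:
    # Twarc.hydrate() takes an iterable of IDs and yields full tweet objects.
    for tweet in t.hydrate(ids):
        out.write(json.dumps(tweet) + "\n")
```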
Files (6.0 GB)

Name | MD5 | Size
---|---|---
emoji.zip | md5:044253dbd278c4a56e1826c446d13ada | 3.2 MB
frequent_bigrams.csv | md5:23f90af5bf291ccb64d67e62b5e262cc | 18.7 kB
frequent_terms.csv | md5:4322bba64118aefb4a08e9aa01800db7 | 12.3 kB
frequent_trigrams.csv | md5:3f8e33fd61bd299d3a3e919a897f0ed2 | 24.7 kB
full_dataset-statistics.tsv | md5:852a07816b4407c648803f256f1d7ad0 | 3.8 kB
full_dataset.tsv.gz | md5:49bccecaa85cbed1fb4c3ef67e9dc658 | 4.6 GB
full_dataset_clean-statistics.tsv | md5:6836b810811568e954ffbd5190b9e5fc | 3.7 kB
full_dataset_clean.tsv.gz | md5:b8fa9dc04e5e46edb5b247fe0f5ab6cc | 1.2 GB
hashtags.zip | md5:cc862c506471610ba5a1609966affd9f | 65.9 MB
mentions.zip | md5:4518a724c06e444987e766e32f6a91f2 | 117.1 MB
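To verify a download against the checksums above, a minimal sketch in Python (the name-to-checksum pairing follows the listing; adjust the paths to your local copies):

```python
import hashlib

# Expected checksums, copied from the file listing above.
EXPECTED = {
    "frequent_terms.csv": "4322bba64118aefb4a08e9aa01800db7",
    "emoji.zip": "044253dbd278c4a56e1826c446d13ada",
}

def md5sum(path, block_size=1 << 20):
    """Stream the file in 1 MiB blocks so multi-GB files fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            digest.update(block)
    return digest.hexdigest()

for name, expected in EXPECTED.items():
    status = "OK" if md5sum(name) == expected else "MISMATCH"
    print(f"{name}: {status}")
```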
Additional details
Related works
- Is continued by: http://www.panacealab.org/covid19/ (Other; URL)
- Is supplement to: https://arxiv.org/abs/2004.03688 (Preprint; URL)