Journal article Open Access
As user-generated content (UGC) enters the news cycle alongside content captured by news professionals, it is important to detect misleading content as early as possible and prevent its dissemination. The purpose of this paper is to present an annotated dataset of 380 user-generated videos (UGVs), 200 debunked and 180 verified, along with 5,195 near-duplicate reposted versions of them, and a set of automatic verification experiments intended to serve as a baseline for future comparisons.

The dataset was formed through a systematic process combining text search and near-duplicate video retrieval, followed by manual annotation using a set of journalism-inspired guidelines. Once the dataset was formed, the automatic verification step was carried out using machine learning over a set of well-established features.

Analysis of the dataset reveals distinctive patterns in the spread of verified vs debunked videos, and the application of state-of-the-art machine learning models shows that the dataset poses a particularly challenging problem for automatic methods.

Practical limitations constrained the current collection to three platforms: YouTube, Facebook and Twitter. Furthermore, the dataset analysis yields a wealth of information that goes beyond the constraints of a single paper; extension to other platforms and further analysis will be the object of subsequent research.

The dataset analysis indicates directions for future automatic video verification algorithms, and the dataset itself provides a challenging benchmark. Having a carefully collected and labelled dataset of debunked and verified videos is an important resource both for developing effective disinformation-countering tools and for supporting media literacy activities.
Besides its importance as a unique benchmark for research in automatic verification, the analysis also offers a glimpse into the dissemination patterns of UGC and into possible telltale differences between fake and real content.