10.5281/zenodo.5530410
https://zenodo.org/records/5530410
oai:zenodo.org:5530410
Potthast, Martin
Martin
Potthast
0000-0003-2451-0665
Bauhaus-Universität Weimar
Gollub, Tim
Tim
Gollub
0000-0003-1737-6517
Bauhaus-Universität Weimar
Wiegmann, Matti
Matti
Wiegmann
0000-0002-3911-0456
Bauhaus-Universität Weimar
Stein, Benno
Benno
Stein
0000-0001-9033-2217
Bauhaus-Universität Weimar
Hagen, Matthias
Matthias
Hagen
0000-0002-9733-2890
Bauhaus-Universität Weimar
Komlossy, Kristof
Kristof
Komlossy
Bauhaus-Universität Weimar
Schuster, Sebastian
Sebastian
Schuster
Bauhaus-Universität Weimar
Fernandez, Erika P. Garces
Erika P. Garces
Fernandez
Bauhaus-Universität Weimar
Webis Clickbait Corpus 2017 (Webis-Clickbait-17)
Zenodo
2018
clickbait
2018-06-11
eng
10.5281/zenodo.3346490
https://zenodo.org/communities/webis
Creative Commons Attribution 4.0 International
The Webis Clickbait Corpus 2017 (Webis-Clickbait-17) comprises a total of 38,517 Twitter posts from 27 major US news publishers. In addition to the posts, information about the articles linked in the posts is included. The posts were published between November 2016 and June 2017. To avoid publisher and topical biases, a maximum of ten posts per day and publisher was sampled. All posts were annotated on a 4-point scale [not click baiting (0.0), slightly click baiting (0.33), considerably click baiting (0.66), heavily click baiting (1.0)] by five annotators from Amazon Mechanical Turk. A total of 9,276 posts are considered clickbait by the majority of annotators. In terms of its size, this corpus exceeds the Webis Clickbait Corpus 2016 by one order of magnitude. The corpus is divided into two logical parts, a training and a test dataset. The training dataset has been released in the course of the Clickbait Challenge, and a download link is provided below. To allow for an objective evaluation of clickbait detection systems, the test dataset is currently available only through the Evaluation-as-a-Service platform TIRA. On TIRA, developers can deploy clickbait detection systems and execute them against the test dataset. The performance of the submitted systems can be viewed on the TIRA page of the Clickbait Challenge.
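To illustrate the annotation scheme, the following sketch aggregates the five per-post ratings into a mean score and a majority class. The field names, the midpoint threshold of 0.5, and the example ratings are illustrative assumptions, not the corpus's official schema:

```python
from collections import Counter

# Five annotator ratings on the corpus's 4-point scale:
# 0.0 (not click baiting), 0.33 (slightly), 0.66 (considerably), 1.0 (heavily).
# The ratings below are made up for illustration.
ratings = [0.0, 0.33, 0.66, 1.0, 1.0]

# Mean score across the five annotators.
mean_score = sum(ratings) / len(ratings)

# Majority vote: count a rating as a clickbait vote if it lies above
# the scale's midpoint (assumed threshold of 0.5).
votes = ["clickbait" if r > 0.5 else "no-clickbait" for r in ratings]
majority_class = Counter(votes).most_common(1)[0][0]

print(round(mean_score, 3), majority_class)  # 0.598 clickbait
```

A post such as this one, rated above the midpoint by three of five annotators, would fall into the 9,276 majority-clickbait posts.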
To make working with the Webis Clickbait Corpus 2017 convenient, and to allow for its validation and replication, we are developing and sharing a number of software tools:
Corpus Viewer. Our Django web service for exploring corpora. For importing the Webis Clickbait Corpus 2017 into the corpus viewer, we provide an appropriate configuration file.
MTurk Manager. Our Django web service for conducting sophisticated crowdsourcing tasks on Amazon Mechanical Turk. The service allows managing projects, uploading batches of HITs, applying custom reviewing interfaces, and more. To make the clickbait crowdsourcing task replicable, we share the worker template that we used to instruct the workers and to display the tweets. Also shared is a reviewing template that can be used to accept or reject assignments and to quickly assess the quality of the received annotations.
Web Archiver. Software for archiving web pages as WARC files and replaying them later. This software can be used to open the WARC archives provided above.
In addition to the corpus "clickbait17-train-170630.zip", we provide the original WARC archives of the articles linked in the posts. They are split into five archives that can be extracted separately.
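The record layout of the WARC archives can be sketched with the standard library alone; the record below is a handcrafted example assuming the WARC/1.0 format, and real archives are better handled by the Web Archiver tool mentioned above or a dedicated reader such as warcio:

```python
# A minimal, handcrafted WARC/1.0 record: header lines, a blank line,
# then the record body (assumed layout for illustration).
record = (
    b"WARC/1.0\r\n"
    b"WARC-Type: response\r\n"
    b"WARC-Target-URI: http://example.com/article\r\n"
    b"Content-Length: 13\r\n"
    b"\r\n"
    b"Hello, World!"
)

# Split the header block from the body at the first blank line.
header_blob, body = record.split(b"\r\n\r\n", 1)
header_lines = header_blob.decode("utf-8").split("\r\n")
version = header_lines[0]
headers = dict(line.split(": ", 1) for line in header_lines[1:])

# Content-Length tells us how many body bytes belong to this record.
content = body[: int(headers["Content-Length"])]
print(version, headers["WARC-Target-URI"], content.decode())
```

This only demonstrates the on-disk structure; it does not implement record chaining, gzip compression, or the other parts of the WARC specification.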