Published June 11, 2018 | Version v2
Dataset Open

Webis Clickbait Corpus 2017 (Webis-Clickbait-17)


The Webis Clickbait Corpus 2017 (Webis-Clickbait-17) comprises a total of 38,517 Twitter posts from 27 major US news publishers. In addition to the posts, information about the articles linked in the posts is included. The posts were published between November 2016 and June 2017. To avoid publisher and topical biases, a maximum of ten posts per day and publisher was sampled. All posts were annotated on a 4-point scale [not click baiting (0.0), slightly click baiting (0.33), considerably click baiting (0.66), heavily click baiting (1.0)] by five annotators from Amazon Mechanical Turk. A total of 9,276 posts are considered clickbait by the majority of annotators. In terms of size, this corpus exceeds the Webis Clickbait Corpus 2016 by one order of magnitude. The corpus is divided into two logical parts, a training and a test dataset. The training dataset was released in the course of the Clickbait Challenge, and a download link is provided below. To allow for an objective evaluation of clickbait detection systems, the test dataset is currently available only through the Evaluation-as-a-Service platform TIRA. On TIRA, developers can deploy clickbait detection systems and execute them against the test dataset. The performance of the submitted systems can be viewed on the TIRA page of the Clickbait Challenge.
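The aggregation of the five crowd judgments per post can be sketched as follows. This is a minimal illustration, assuming that the mean and median of the judgments are recorded and that a post counts as clickbait when a majority of annotators rated it above the scale midpoint; the field names (`truthMean`, `truthMedian`, `truthClass`) mirror the Clickbait Challenge conventions but are illustrative here, not guaranteed by this page:

```python
from statistics import mean, median

def aggregate_judgments(judgments):
    """Aggregate crowd judgments on the 4-point scale
    [0.0, 0.33, 0.66, 1.0] into summary scores (illustrative sketch)."""
    truth_mean = mean(judgments)
    truth_median = median(judgments)
    # Assumption: a post is clickbait if the majority of annotators
    # rated it above the scale midpoint of 0.5.
    clickbait_votes = sum(1 for j in judgments if j > 0.5)
    truth_class = "clickbait" if clickbait_votes > len(judgments) / 2 else "no-clickbait"
    return {"truthMean": truth_mean,
            "truthMedian": truth_median,
            "truthClass": truth_class}

print(aggregate_judgments([1.0, 0.66, 0.66, 0.33, 0.0]))
```

Under this majority rule, a post with judgments [1.0, 0.66, 0.66, 0.33, 0.0] receives three of five votes above 0.5 and is labeled clickbait.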

To make working with the Webis Clickbait Corpus 2017 convenient, and to allow for its validation and replication, we are developing and sharing a number of software tools:

  • Corpus Viewer. Our Django web service for exploring corpora. For importing the Webis Clickbait Corpus 2017 into the corpus viewer, we provide an appropriate configuration file.
  • MTurk Manager. Our Django web service for conducting sophisticated crowdsourcing tasks on Amazon Mechanical Turk. The service allows users to manage projects, upload batches of HITs, apply custom reviewing interfaces, and more. To make the clickbait crowdsourcing task replicable, we share the worker template that we used to instruct the workers and to display the tweets. Also shared is a reviewing template that can be used to accept or reject assignments and to quickly assess the quality of the received annotations.
  • Web Archiver. Software for archiving web pages as WARC files and reproducing them later on. This software can be used to open the WARC archives provided with this dataset.

In addition to the corpus itself, we provide the original WARC archives of the articles that are linked in the posts. They are split into five archives that can be extracted separately.



Files (107.4 GB)

  • 20.6 GB
  • 20.6 GB
  • 20.9 GB
  • 20.4 GB
  • 19.6 GB
  • 934.7 MB
  • 147.8 MB
  • 937.1 MB
  • 3.3 GB