Dataset Open Access

Dataset for generating TL;DR

Syed, Shahbaz; Voelske, Michael; Potthast, Martin; Stein, Benno

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Syed, Shahbaz</dc:creator>
  <dc:creator>Voelske, Michael</dc:creator>
  <dc:creator>Potthast, Martin</dc:creator>
  <dc:creator>Stein, Benno</dc:creator>
  <dc:description>This is the dataset for the TL;DR challenge, containing posts from the Reddit corpus and suitable for abstractive summarization using deep learning. The dataset is a JSON file in which each line is a JSON object representing one post. The schema of each post is shown below:

	author: string (nullable = true)
	body: string (nullable = true)
	normalizedBody: string (nullable = true)
	content: string (nullable = true)
	content_len: long (nullable = true)
	summary: string (nullable = true)
	summary_len: long (nullable = true)
	id: string (nullable = true)
	subreddit: string (nullable = true)
	subreddit_id: string (nullable = true)
	title: string (nullable = true)

Specifically, the content and summary fields can be used directly as inputs to a deep learning model (e.g., a sequence-to-sequence model). The dataset consists of 3,084,410 posts, with an average length of 211 words for the content and 25 words for the summary.

Note: As this is the complete dataset for the challenge, it is up to the participants to split it into training and validation sets accordingly.</dc:description>
  <dc:subject>tl;dr challenge</dc:subject>
  <dc:subject>abstractive summarization</dc:subject>
  <dc:subject>social media</dc:subject>
  <dc:subject>user-generated content</dc:subject>
  <dc:title>Dataset for generating TL;DR</dc:title>
</oai_dc:dc>
	All versions	This version
Views	1,303	1,304
Downloads	892	892
Data volume	1.9 TB	1.9 TB
Unique views	1,191	1,192
Unique downloads	714	714