Published February 8, 2018 | Version v1
Dataset | Open Access

Dataset for generating TL;DR

  • Bauhaus-Universität Weimar

Description

This is the dataset for the TL;DR challenge, containing posts from the Reddit corpus, suitable for abstractive summarization using deep learning. The data is provided as a JSON-lines file: each line is a JSON object representing one post. The schema of each post is shown below:

  • author: string (nullable = true)
  • body: string (nullable = true)
  • normalizedBody: string (nullable = true)
  • content: string (nullable = true)
  • content_len: long (nullable = true)
  • summary: string (nullable = true)
  • summary_len: long (nullable = true)
  • id: string (nullable = true)
  • subreddit: string (nullable = true)
  • subreddit_id: string (nullable = true)
  • title: string (nullable = true)

Specifically, the content and summary fields can be used directly as the input and target of a deep learning model (e.g., a sequence-to-sequence model). The dataset consists of 3,084,410 posts, with an average length of 211 words for content and 25 words for summary.
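
As a minimal sketch of how the file can be consumed, the snippet below streams the JSON-lines file and yields (content, summary) pairs. The extracted filename tldr-challenge-dataset.jsonl is an assumption, since only the zip archive name is listed under Files below.

    import json

    def iter_pairs(path):
        # Stream the JSON-lines file one post at a time.
        with open(path, encoding="utf-8") as f:
            for line in f:
                post = json.loads(line)
                # Both fields are nullable in the schema, so skip incomplete posts.
                if post.get("content") and post.get("summary"):
                    yield post["content"], post["summary"]

    # NOTE: the filename inside the zip is an assumption; adjust to the actual name.
    for content, summary in iter_pairs("tldr-challenge-dataset.jsonl"):
        pass  # feed (content, summary) into the model's input pipeline

Streaming line by line avoids loading the full 2.2 GB file into memory at once.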

Note: As this is the complete dataset for the challenge, it is up to the participants to split it into training and validation sets accordingly.
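
One simple way to produce such a split is a seeded random shuffle with a held-out slice; a sketch, assuming the posts have already been parsed into a list (the variable all_posts below is hypothetical):

    import random

    def train_val_split(records, val_fraction=0.05, seed=0):
        # Deterministic shuffle, then hold out the first slice for validation.
        records = list(records)
        random.Random(seed).shuffle(records)
        n_val = int(len(records) * val_fraction)
        return records[n_val:], records[:n_val]

    # `all_posts` is a hypothetical list of parsed post objects (see above).
    # train, val = train_val_split(all_posts)

Fixing the seed keeps the split reproducible across runs, which matters when comparing models trained by different participants.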

Files (2.2 GB)

  • tldr-challenge-dataset.zip (2.2 GB)
    md5:28951b6f3d5c6fd6f97e1f6314be3661
