Dataset Open Access

Webis-TLDR-17 Corpus

Syed, Shahbaz; Voelske, Michael; Potthast, Martin; Stein, Benno


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nmm##2200000uu#4500</leader>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">tl;dr</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Abstractive Summarization</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Social Media Dataset</subfield>
  </datafield>
  <controlfield tag="005">20200124192618.0</controlfield>
  <controlfield tag="001">1043504</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="g">EMNLP 2017</subfield>
    <subfield code="a">EMNLP 2017 Workshop on New Frontiers in Summarization</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Bauhaus-Universität Weimar</subfield>
    <subfield code="a">Voelske, Michael</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Bauhaus-Universität Weimar</subfield>
    <subfield code="a">Potthast, Martin</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Bauhaus-Universität Weimar</subfield>
    <subfield code="a">Stein, Benno</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">3141854161</subfield>
    <subfield code="z">md5:e2fb1d5026cdb895ea640bdb134d0398</subfield>
    <subfield code="u">https://zenodo.org/record/1043504/files/corpus-webis-tldr-17.zip</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2017-11-07</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire_data</subfield>
    <subfield code="p">user-webis</subfield>
    <subfield code="o">oai:zenodo.org:1043504</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">Bauhaus-Universität Weimar</subfield>
    <subfield code="a">Syed, Shahbaz</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Webis-TLDR-17 Corpus</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-webis</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;This corpus contains preprocessed posts from the Reddit dataset, suitable for abstractive summarization using deep learning. The corpus is a JSON Lines file in which each line is a JSON object representing one post. The schema of each post is shown below:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;author: string (nullable = true)&lt;/li&gt;
	&lt;li&gt;body: string (nullable = true)&lt;/li&gt;
	&lt;li&gt;normalizedBody: string (nullable = true)&lt;/li&gt;
	&lt;li&gt;content: string (nullable = true)&lt;/li&gt;
	&lt;li&gt;content_len: long (nullable = true)&lt;/li&gt;
	&lt;li&gt;summary: string (nullable = true)&lt;/li&gt;
	&lt;li&gt;summary_len: long (nullable = true)&lt;/li&gt;
	&lt;li&gt;id: string (nullable = true)&lt;/li&gt;
	&lt;li&gt;subreddit: string (nullable = true)&lt;/li&gt;
	&lt;li&gt;subreddit_id: string (nullable = true)&lt;/li&gt;
	&lt;li&gt;title: string (nullable = true)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Specifically, the &lt;strong&gt;content&lt;/strong&gt; and &lt;strong&gt;summary&lt;/strong&gt; fields can be used directly as inputs to a deep learning model (e.g., a sequence-to-sequence model). The dataset consists of 3,848,330 posts with an average length of 270 words for the content and 28 words for the summary. The dataset combines both the Submissions and Comments dumps, merged on the common schema; consequently, comments that do not belong to any submission have &lt;strong&gt;null&lt;/strong&gt; as their title.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This corpus does not contain a separate test set; it is up to users to divide the corpus into appropriate training, validation, and test sets.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">url</subfield>
    <subfield code="i">isDocumentedBy</subfield>
    <subfield code="a">https://www.uni-weimar.de/en/media/chairs/computer-science-and-media/webis/corpora/corpus-webis-tldr-17/</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.1043503</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.1043504</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">dataset</subfield>
  </datafield>
</record>
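The JSON Lines layout documented in the record's description can be consumed with a few lines of Python. The sketch below is illustrative only: the sample record and its field values are invented to match the schema, and stand in for lines of the real `corpus-webis-tldr-17.zip` contents.

```python
import io
import json

# One invented line matching the documented schema; in practice you would
# iterate over the (multi-gigabyte) corpus file line by line instead.
sample = io.StringIO(
    '{"author": "user1", "body": "long text tl;dr short text", '
    '"normalizedBody": "long text TL;DR short text", '
    '"content": "long text", "content_len": 2, '
    '"summary": "short text", "summary_len": 2, '
    '"id": "c0001", "subreddit": "AskReddit", '
    '"subreddit_id": "t5_2qh1i", "title": null}\n'
)

def iter_pairs(fh):
    """Yield (content, summary) pairs, e.g. for seq2seq training."""
    for line in fh:
        post = json.loads(line)
        # Comments merged in from the comment dump carry a null title,
        # but content and summary are always populated.
        yield post["content"], post["summary"]

pairs = list(iter_pairs(sample))
print(pairs[0])  # ('long text', 'short text')
```

Since no official train/validation/test split is provided, any split (e.g. by post `id` or by subreddit) must be defined and documented by the user.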