Dataset Open Access

Pilot study: Ranking of textual snippets based on the writing style

Andi Rexha; Mark Kröll; Hermann Ziak; Roman Kern

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="" xmlns="" xsi:schemaLocation="">
  <identifier identifierType="DOI">10.5281/zenodo.437461</identifier>
  <creators>
    <creator>
      <creatorName>Andi Rexha</creatorName>
      <affiliation>Research Assistant at TU Graz</affiliation>
    </creator>
    <creator>
      <creatorName>Mark Kröll</creatorName>
      <affiliation>Post Doc at Know-Center GmbH</affiliation>
    </creator>
    <creator>
      <creatorName>Hermann Ziak</creatorName>
      <affiliation>Research Assistant at Know-Center GmbH</affiliation>
    </creator>
    <creator>
      <creatorName>Roman Kern</creatorName>
      <affiliation>Head of Knowledge Discovery at Know-Center GmbH</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Pilot study: Ranking of textual snippets based on the writing style</title>
  </titles>
  <subjects>
    <subject>authorship attribution</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2017-03-22</date>
  </dates>
  <resourceType resourceTypeGeneral="Dataset"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
  </alternateIdentifiers>
  <rightsList>
    <rights rightsURI="">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;In this pilot study, we tried to capture human behavior when identifying the authorship of text snippets. First, we selected textual snippets from the introductions of scientific articles written by a single author. We then presented the evaluators with one source snippet and four target snippets and asked them to rank the target snippets from most to least similar in writing style.&lt;/p&gt;

&lt;p&gt;The dataset consists of 66 experiments, each manually checked to ensure it contains no obvious clue that could help the evaluators with the ranking. For each experiment, we have evaluations from three different evaluators.&lt;/p&gt;

&lt;p&gt;Each experiment occupies a single line in the CSV file: first the metadata of the source article (Journal, Title, Authorship, Snippet), then the metadata of the four target snippets (Journal, Title, Authorship, Snippet, Written by the same Author, Published in the same Journal), and finally the ranking given by each evaluator. The task was carried out on the CrowdFlower crowdsourcing platform.&lt;/p&gt;

&lt;p&gt;The headers of the CSV file are self-explanatory. The TXT file contains a human-readable version of each experiment.&lt;/p&gt;

&lt;p&gt;For more information about how the data were extracted, please consider reading our paper "Extending Scientific Literature Search by Including the Author’s Writing Style" @BIR:&lt;/p&gt;</description>
    <description descriptionType="Other">Acknowledgements:
The Know-Center is funded within the Austrian COMET Program under the auspices of the Austrian Ministry of Transport, Innovation and Technology, the Austrian Ministry of Economics and Labour and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.</description>
  </descriptions>
</resource>
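The per-line CSV layout described in the abstract (source-article metadata, then the metadata of the four target snippets, then the three evaluators' rankings) can be read with a short script. This is a minimal sketch only: the column names below are illustrative assumptions, not the dataset's actual headers, which are given in the first line of the CSV file itself.

```python
import csv
import io

# Illustrative header, mirroring the layout described in the abstract:
# 4 source-article fields, 6 fields for each of the 4 target snippets,
# and 3 evaluator rankings. The real header names come from the CSV file.
header = (
    ["src_journal", "src_title", "src_author", "src_snippet"]
    + [f"t{i}_{field}"
       for i in range(1, 5)
       for field in ("journal", "title", "author", "snippet",
                     "same_author", "same_journal")]
    + ["rank_eval1", "rank_eval2", "rank_eval3"]
)

# In-memory stand-in for the dataset file; with the real file you would
# use: open("experiments.csv", newline="", encoding="utf-8")
sample = io.StringIO(
    ",".join(header) + "\n" + ",".join(["x"] * len(header)) + "\n"
)

# Each row of the DictReader is one experiment.
rows = list(csv.DictReader(sample))
```

With this layout, one experiment maps to 4 + 4×6 + 3 = 31 columns, and each `row` dictionary gives direct access to, for example, `row["t1_same_author"]` for the first target snippet.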