Dataset Open Access

# Pilot study: Ranking of textual snippets based on the writing style

Andi Rexha; Mark Kröll; Hermann Ziak; Roman Kern

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource>
<identifier identifierType="DOI">10.5281/zenodo.437461</identifier>
<creators>
<creator>
<creatorName>Andi Rexha</creatorName>
<affiliation>Research Assistant at TU Graz</affiliation>
</creator>
<creator>
<creatorName>Mark Kröll</creatorName>
<affiliation>Post Doc at Know-Center GmbH</affiliation>
</creator>
<creator>
<creatorName>Hermann Ziak</creatorName>
<affiliation>Research Assistant at Know-Center GmbH</affiliation>
</creator>
<creator>
<creatorName>Roman Kern</creatorName>
<affiliation>Head of Knowledge Discovery at Know-Center GmbH</affiliation>
</creator>
</creators>
<titles>
<title>Pilot study: Ranking of textual snippets based on the writing style</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2017</publicationYear>
<subjects>
<subject>stylometry</subject>
</subjects>
<dates>
<date dateType="Issued">2017-03-22</date>
</dates>
<resourceType resourceTypeGeneral="Dataset"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/437461</alternateIdentifier>
</alternateIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;In this pilot study, we tried to capture how humans behave when identifying the authorship of text snippets. First, we selected textual snippets from the introductions of scientific articles written by single authors. We then presented the evaluators with a source snippet and four target snippets, and asked them to rank the target snippets from most to least similar to the source in terms of writing style.&lt;/p&gt;

&lt;p&gt;The dataset consists of 66 experiments, manually checked to ensure they contain no obvious hints that could bias the evaluators' ranking. For each experiment, we collected evaluations from three different evaluators.&lt;/p&gt;

&lt;p&gt;Each experiment occupies a single line in the CSV file: first the metadata of the source article (Journal, Title, Authorship, Snippet), then the metadata of the four target snippets (Journal, Title, Authorship, Snippet, Written by the same Author, Published in the same Journal), and finally the ranking given by each evaluator. The task was carried out on the crowdsourcing platform CrowdFlower.&lt;/p&gt;

&lt;p&gt;The headers of the CSV are self-explanatory. The TXT file contains a human-readable version of the experiments.&lt;/p&gt;</description>

<description descriptionType="Other">Acknowledgements:
The Know-Center is funded within the Austrian COMET Program under the auspices of the Austrian Ministry of Transport, Innovation and Technology, the Austrian Ministry of Economics and Labour and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.</description>
</descriptions>
</resource>
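The row layout described in the abstract (source metadata, four target blocks, three evaluator rankings) can be sketched as follows. This is a minimal illustration only: the field names below are hypothetical, not the dataset's actual CSV headers, and the per-row column count is inferred from the description (4 source fields + 4 × 6 target fields + 3 rankings).

```python
# Hypothetical field names -- the real CSV headers may differ.
SOURCE_FIELDS = ["source_journal", "source_title", "source_author", "source_snippet"]
TARGET_FIELDS = ["journal", "title", "author", "snippet", "same_author", "same_journal"]

def parse_experiment(row):
    """Group one flat CSV row into source metadata, 4 targets, and 3 rankings."""
    it = iter(row)
    # zip() stops after SOURCE_FIELDS is exhausted, consuming exactly 4 cells.
    source = dict(zip(SOURCE_FIELDS, it))
    # Four target blocks of six cells each.
    targets = [dict(zip(TARGET_FIELDS, it)) for _ in range(4)]
    # One ranking string per evaluator.
    rankings = [next(it) for _ in range(3)]
    return source, targets, rankings

# Tiny synthetic row with the expected 4 + 4*6 + 3 = 31 columns.
row = (["J0", "T0", "A0", "S0"]
       + sum([[f"J{i}", f"T{i}", f"A{i}", f"S{i}", "yes", "no"]
              for i in range(1, 5)], [])
       + ["2,1,4,3", "1,2,3,4", "4,3,2,1"])
source, targets, rankings = parse_experiment(row)
```

Grouping the flat row this way makes the same-author and same-journal flags of each target directly addressable, which is what an analysis of the evaluators' rankings would need.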
