Pilot study: Ranking of textual snippets based on writing style
Creators
- 1. Research Assistant at TU Graz
- 2. Post Doc at Know-Center GmbH
- 3. Research Assistant at Know-Center GmbH
- 4. Head of Knowledge Discovery at Know-Center GmbH
Description
In this pilot study, we tried to capture how humans behave when identifying the authorship of text snippets. First, we selected textual snippets from the introductions of scientific articles written by single authors. We then presented evaluators with a source snippet and four target snippets, and asked them to rank the targets from most to least similar to the source in writing style.
The dataset consists of 66 experiments, each manually checked to ensure it contains no obvious hints that could bias the evaluators' rankings. For each experiment, we have evaluations from three different evaluators.
Each experiment occupies a single line in the CSV file: first the metadata of the source article (Journal, Title, Authorship, Snippet), then the metadata for each of the four target snippets (Journal, Title, Authorship, Snippet, Written by the same Author, Published in the same Journal), and finally the ranking given by each evaluator. The ranking task was carried out on the crowdsourcing platform CrowdFlower.
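As an illustration of this row layout, the sketch below parses one such line into a structured record. The column names here are assumptions for the example only; the real names are in the CSV header.

```python
import csv
import io

# Hypothetical column names mirroring the layout described above:
# source metadata, then four target blocks, then three evaluator rankings.
header = (["src_journal", "src_title", "src_author", "src_snippet"]
          + [f"t{i}_{f}" for i in range(1, 5)
             for f in ("journal", "title", "author", "snippet",
                       "same_author", "same_journal")]
          + ["rank_eval1", "rank_eval2", "rank_eval3"])

# A made-up row with placeholder values, just to exercise the parser.
row = ["J0", "Source title", "A. Author", "Some source text..."]
for i in range(1, 5):
    row += [f"J{i}", f"Target {i}", f"B{i}. Author", f"Target text {i}", "no", "no"]
row += ["1 3 2 4", "1 2 3 4", "2 1 3 4"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)
writer.writerow(row)
buf.seek(0)

# Group each target's columns into its own dict.
record = next(csv.DictReader(buf))
targets = [{f: record[f"t{i}_{f}"]
            for f in ("journal", "title", "author", "snippet",
                      "same_author", "same_journal")}
           for i in range(1, 5)]
print(targets[0]["title"], record["rank_eval1"])
```

The same grouping works on the real file once the assumed names are replaced with the actual header values.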
The headers of the CSV are self-explanatory. In the TXT file, you can find a human-readable version of the experiments.
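Since each experiment carries rankings from three evaluators, a natural first analysis is inter-evaluator agreement. The sketch below computes Kendall's tau between two rankings of the four target snippets; the rankings and snippet labels (T1..T4) are hypothetical, not taken from the dataset.

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two rankings, each given as an ordered
    list of the same items (most similar first, least similar last)."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        # A pair is concordant if both evaluators order it the same way.
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    n = len(rank_a)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical rankings of the four target snippets by two evaluators.
eval1 = ["T1", "T3", "T2", "T4"]
eval2 = ["T1", "T2", "T3", "T4"]
print(kendall_tau(eval1, eval2))  # one swapped pair out of six
```

Averaging this statistic over the three evaluator pairs in each of the 66 experiments gives a simple per-experiment agreement score.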
For more information about how the data were extracted, please see our paper "Extending Scientific Literature Search by Including the Author’s Writing Style", presented at the BIR workshop: http://www.gesis.org/en/services/events/events-archive/conferences/ecir-workshops/ecir-workshop-2017
Files (510.8 kB)

Name | Size | Checksum
---|---|---
result-pilot-study.csv | 225.6 kB | md5:3b2932a6c36c025878b2c1c252f83d20
| 285.3 kB | md5:21a724476e9101a492e51c45fdfcc6c1