Dataset Restricted Access

PAN19 Authorship Analysis: Style Change Detection

Zangerle, Eva; Tschuggnall, Michael; Specht, Günther; Potthast, Martin; Stein, Benno

Many approaches have been proposed recently to identify the author of a given document. In doing so, one fact is often silently assumed: that the given document is indeed written by only one author. For a realistic author identification system it is therefore crucial to first determine whether a document is single- or multi-authored.

To this end, previous PAN editions aimed to analyze multi-authored documents. As it has been shown that it is a hard problem to reliably identify individual authors and their contributions within a single document (Author Diarization, 2016; Style Breach Detection, 2017), last year's task substantially relaxed the problem by asking only for a binary decision (single- or multi-authored). Considering the promising results achieved by the submitted approaches (see the overview paper for details), we continue last year's task and additionally ask participants to predict the number of involved authors.

Given a document, participants should thus apply intrinsic style analyses to hierarchically answer the following questions:

  1. Is the document written by one or more authors, i.e., do style changes exist or not?
  2. If it is multi-authored, how many authors have collaborated?

All documents are provided in English and may contain zero up to arbitrarily many style changes, resulting from arbitrarily many authors.

The training set contains 50% of the whole dataset and includes the ground truth. Use this set to train your models.

Like last year, the whole dataset is based on user posts from various sites of the StackExchange network, covering different topics and containing approximately 300 to 2000 tokens per document.

For each problem instance X, two files are provided:

  • problem-X.txt contains the actual text
  • problem-X.truth contains the ground truth, i.e., the correct solution in JSON format:
{ "authors": number_of_authors, "structure": [author_segment_1, ..., author_segment_3], "switches": [ character_pos_switch_segment_1, ..., character_pos_switch_segment_n, ] }

An example for a multi-author document could look as follows:

{ "authors": 4, "structure": ["A1", "A2", "A4", "A2", "A4", "A2", "A3", "A2", "A4"], "switches": [805, 1552, 2827, 3584, 4340, 5489, 7564, 8714] }

whereas a single-author document would have exactly the following form:

{ "authors": 1, "structure": ["A1"], "switches": [] }

Note that the author labels in the structure are specific to the respective document, i.e., they are not consistent across the whole dataset. For example, author A1 in document 1 is most likely not the same author as A1 in document 2 (it could be, but as there are hundreds of authors, the chances are very small that this is the case). Further, please consider that the structure and the switches are provided only as additional resources for the development of your algorithms, i.e., they are not expected to be predicted.
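To illustrate the format, here is a minimal loading sketch in Python. The data root path and helper names are assumptions made for illustration; only the problem-X.txt / problem-X.truth naming and the JSON keys come from the description above.

import json
from pathlib import Path

DATA_DIR = Path("pan19-style-change-detection/training")  # hypothetical path

def load_problem(problem_id):
    # problem-X.txt holds the text, problem-X.truth the JSON ground truth
    text = (DATA_DIR / f"problem-{problem_id}.txt").read_text(encoding="utf-8")
    truth = json.loads((DATA_DIR / f"problem-{problem_id}.truth").read_text(encoding="utf-8"))
    return text, truth

def targets(truth):
    # Derive the two prediction targets: single- vs. multi-authored,
    # and the number of involved authors.
    return {"multi-authored": truth["authors"] > 1, "authors": truth["authors"]}

def segments(text, truth):
    # Split the text at the character positions in 'switches'.
    # Reminder: 'structure' and 'switches' are development aids only;
    # they are not expected to be predicted.
    bounds = [0] + truth["switches"] + [len(text)]
    return [text[a:b] for a, b in zip(bounds, bounds[1:])]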

To tackle the problem, you can develop novel approaches, extend existing algorithms from last year's task, or adapt approaches from related problems such as intrinsic plagiarism detection or text segmentation; a minimal baseline sketch follows below. You are also free to additionally evaluate your approaches on last year's training/validation/test dataset (for the number of authors, use the corresponding metadata).
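As a starting point, the following sketch illustrates one such intrinsic analysis: comparing character trigram profiles of adjacent text windows and flagging a document as multi-authored when any two neighbouring windows diverge. The window size and threshold are illustrative placeholders, not values tuned on this dataset.

from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    # Character n-gram profile, a common stylometric feature.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def predict_multi_authored(text, window=1000, threshold=0.6):
    # Flag the document as multi-authored if any two adjacent windows are
    # stylistically dissimilar; 'window' and 'threshold' are illustrative guesses.
    chunks = [text[i:i + window] for i in range(0, len(text), window)]
    profiles = [char_ngrams(c) for c in chunks if len(c) >= window // 2]
    return any(cosine(p, q) < threshold for p, q in zip(profiles, profiles[1:]))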

Version 2.0: added validation set
Restricted Access

You may request access to the files in this upload, provided that you fulfil the conditions below. The decision whether to grant or deny access is solely the responsibility of the record owner.


Please request access to the data with a short statement on how you want to use it. Thanks!

We would like to point out that you can register on pan.webis.de to be part of the PAN community.


  • Eva Zangerle, Michael Tschuggnall, Günther Specht, Martin Potthast, and Benno Stein. Overview of the Style Change Detection Task at PAN 2019. In Linda Cappellato, Nicola Ferro, David E. Losada, and Henning Müller, editors, CLEF 2019 Labs and Workshops, Notebook Papers, September 2019. CEUR-WS.org.
