
Published February 10, 2020 | Version v1
Dataset Restricted

PAN20 Authorship Analysis: Style Change Detection

  • 1. Universität Innsbruck
  • 2. Universität Leipzig
  • 3. Bauhaus-Universität Weimar

Description

This is the data set for the Style Change Detection task of PAN 2020.

Task

The goal of the style change detection task is to identify the text positions within a given multi-author document at which the author switches. Detecting these positions is a crucial part of the authorship identification process and of multi-author document analysis in general. Note that, for this task, we assume that a change in writing style always signifies a change in author.

Previous editions of the Style Change Detection task aimed at related goals, e.g., detecting whether a document is single- or multi-authored (2018) or determining the actual number of authors within a document (2019). Considering the promising results achieved by the submitted approaches, we aim to steer the task back to its original goal: detecting the exact positions of authorship changes. Therefore, the task for PAN'20 is to detect whether a document was authored by one or multiple authors and to find the positions of style changes at the paragraph level. For each pair of consecutive paragraphs of a document, we ask participants to estimate whether there is indeed a style change between those two paragraphs.

Tasks

Given a document, we ask participants to answer the following two questions:

  • Was the given document written by multiple authors? (task 1)
  • For each pair of consecutive paragraphs in the given document: is there a style change between these paragraphs? (task 2)

In other words, the goal is to determine whether the given document contains style changes and, if it does, to find the positions of those changes in the document (between paragraphs).

All documents are provided in English and may contain between zero and ten style changes, resulting from at most three different authors. However, style changes may only occur between paragraphs (i.e., a single paragraph is always authored by a single author and does not contain any style change).

Data

To develop and then test your algorithms, two data sets including ground truth information are provided. These data sets differ in their topical breadth (i.e., the number of different topics covered in the contained documents). dataset-narrow contains texts from a relatively narrow set of subject areas (all related to technology), whereas dataset-wide adds further subject areas (travel, philosophy, economics, history, etc.).

Both of those data sets are split into three parts:

  1. training set: Contains 50% of the whole data set and includes ground truth data. Use this set to develop and train your models.
  2. validation set: Contains 25% of the whole data set and includes ground truth data. Use this set to evaluate and optimize your models.
  3. test set: Contains 25% of the whole data set. For the documents in the test set, you are not given ground truth data. This set is used for evaluation (see below).

You are free to use additional external data for training your models. However, we ask you to make any additional data you use freely available under a suitable license.

Please cite the following paper when using the provided dataset:

@InProceedings{zangerle:2020,
  author = {Eva Zangerle and Maximilian Mayerl and G{\"u}nther Specht and Martin Potthast and Benno Stein},
  booktitle = {{CLEF 2020 Labs and Workshops, Notebook Papers}},
  editor = {Linda Cappellato and Carsten Eickhoff and Nicola Ferro and Aur{\'e}lie N{\'e}v{\'e}ol},
  month =   sep,
  publisher = {CEUR-WS.org},
  title = {{Overview of the Style Change Detection Task at PAN 2020}},
  year = 2020
}

Input Format

Both dataset-narrow and dataset-wide are based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem.

The structure of the provided datasets is as follows:

                    
train/
    dataset-narrow/
    dataset-wide/
validation/
    dataset-narrow/
    dataset-wide/
test/
    dataset-narrow/
    dataset-wide/
            

For each problem instance X (i.e., each input document), two files are provided:

  1. problem-X.txt contains the actual text, where paragraphs are separated by \n\n.
  2. truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format:
    {
        "authors": NUMBER_OF_AUTHORS,
        "structure": ORDER_OF_AUTHORS,
        "site": SOURCE_SITE,
        "multi-author": RESULT_TASK1,
        "changes": RESULT_ARRAY_TASK2
    }

    The result for task 1 (key "multi-author") is a binary value (1 if the document is multi-authored, 0 if it is single-authored). The result for task 2 (key "changes") is represented as an array holding a binary value for each pair of consecutive paragraphs within the document (0 if there is no style change, 1 if there is a style change). If the document is single-authored, the solution to task 2 is an array filled with 0s. Furthermore, we provide the order of authors contained in the document (e.g., [A1, A2, A1] for a two-author document), the total number of authors, and the StackExchange site the texts were extracted from (i.e., the topic). A small loading sketch follows the examples below.

    An example of a multi-author document with a style change between the third and fourth paragraph could look as follows (we only list the two relevant key/value pairs here):

    {
        "multi-author": 1,
        "changes": [0,0,1,...]
    }

    A single-author document would have the following form (again, only listing the two relevant key/value pairs):

    {
        "multi-author": 0,
        "changes": [0,0,0,...]
    }
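
To make the format above concrete, here is a minimal Python sketch for reading one problem instance together with its ground truth. The helper name load_problem and the example path are our own; only the file naming scheme and the JSON keys are taken from the description above.

import json
from pathlib import Path

def load_problem(dataset_dir, problem_id):
    # Read the raw text of one problem instance.
    dataset_dir = Path(dataset_dir)
    text = (dataset_dir / f"problem-{problem_id}.txt").read_text(encoding="utf-8")
    # Paragraphs are separated by two consecutive newlines (\n\n).
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    # The ground truth file is only available for the training and validation sets.
    truth_path = dataset_dir / f"truth-problem-{problem_id}.json"
    truth = json.loads(truth_path.read_text(encoding="utf-8")) if truth_path.exists() else None
    return paragraphs, truth

# Example usage (the path is a placeholder for wherever the data was unpacked):
paragraphs, truth = load_problem("train/dataset-narrow", 1)
if truth is not None:
    # "changes" holds one binary value per pair of consecutive paragraphs.
    assert len(truth["changes"]) == len(paragraphs) - 1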

Output Format

To evaluate the solutions for the two tasks, the classification results have to be stored in a single file for each of the input documents. Please note that we require a solution file to be generated for each input problem. The data structure during the evaluation phase will be similar to that in the training phase, with the exception that the ground truth files are missing.

For each given problem problem-X.txt, your software should output the missing solution file solution-problem-X.json, containing a JSON object with two properties, one for each task. The actual solution for task 1 is a binary value (0 or 1). For task 2, the solution is an array containing a binary value for each pair of consecutive paragraphs.

An example solution file for a multi-authored document is featured in the following:

{
    "multi-author": 1,
    "changes": [0,0,1,...]
}

For a single-authored document the solution file may look as follows:

{
    "multi-author": 0,
    "changes": [0,0,0,...]
}
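
A minimal sketch for producing such a solution file is given below; the function name write_solution and the predictions passed to it are placeholders, and only the file naming scheme and the two JSON properties follow the specification above.

import json
from pathlib import Path

def write_solution(output_dir, problem_id, multi_author, changes):
    # Assemble the JSON object with the two required properties.
    solution = {
        "multi-author": int(multi_author),      # task 1: 0 or 1
        "changes": [int(c) for c in changes],   # task 2: one 0/1 value per consecutive paragraph pair
    }
    out_path = Path(output_dir) / f"solution-problem-{problem_id}.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(solution), encoding="utf-8")

# Example: hypothetical predictions for a document with a style change
# between the third and fourth paragraph.
write_solution("output/dataset-narrow", 12, 1, [0, 0, 1, 0])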

We provide you with a script to check the validity of the solution files [verifier][tests].

Evaluation

Submissions are evaluated by the F1-score. The two tasks are evaluated independently: for task 1, we compute the average F1-score across all documents, and for task 2, we use the micro-averaged F1-score across all documents. The submissions for the two datasets are evaluated independently, and the resulting F1-scores for the two tasks are averaged across the two datasets.

We provide you with a script to compute those measures based on the produced output files [code][tests].
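
The provided script is the reference implementation; the following unofficial sketch only illustrates one plausible way to reproduce the two measures with scikit-learn, assuming task 1 is scored over the per-document decisions and task 2 over the pooled paragraph-pair decisions.

from sklearn.metrics import f1_score

def evaluate(truths, solutions):
    # truths and solutions are lists of dicts aligned by problem ID, each
    # holding the keys "multi-author" and "changes" as described above.

    # Task 1: one binary decision per document.
    t1_true = [t["multi-author"] for t in truths]
    t1_pred = [s["multi-author"] for s in solutions]
    task1_f1 = f1_score(t1_true, t1_pred)

    # Task 2: pool the paragraph-pair decisions of all documents (micro-averaging).
    t2_true = [c for t in truths for c in t["changes"]]
    t2_pred = [c for s in solutions for c in s["changes"]]
    task2_f1 = f1_score(t2_true, t2_pred, average="micro")

    return task1_f1, task2_f1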

Submission

Once you have finished tuning your approach on the validation set, your software will be tested on the test set. During the competition, the test set will not be released publicly. Instead, we ask you to submit your software for evaluation at our site as described below.

We ask you to prepare your software so that it can be executed via command line calls. The command shall take as input (i) an absolute path to the directory of the test corpus and (ii) an absolute path to an empty output directory:

mySoftware -i INPUT-DIRECTORY -o OUTPUT-DIRECTORY

Within OUTPUT-DIRECTORY, we require two subfolders: dataset-narrow and dataset-wide, holding the solutions for the two datasets, respectively. As the provided output directory is guaranteed to be empty, your software needs to create those subfolders.

Within INPUT-DIRECTORY, you will find one folder for each dataset, holding a set of problem instances (i.e., problem-[id].txt files). For each problem instance, you should produce the solution file solution-problem-[id].json in the OUTPUT-DIRECTORY. For instance, you read INPUT-DIRECTORY/dataset-narrow/problem-12.txt, process it, and write your results to OUTPUT-DIRECTORY/dataset-narrow/solution-problem-12.json. A minimal skeleton of such an interface is sketched below.
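
The following sketch only illustrates the required directory handling; the always-single-author baseline in predict is a placeholder for your actual model, and the argument names mirror the command-line call shown above.

import argparse
import json
from pathlib import Path

def predict(paragraphs):
    # Placeholder baseline: always predict a single-authored document.
    return {"multi-author": 0, "changes": [0] * max(len(paragraphs) - 1, 0)}

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", dest="input_dir", required=True)
    parser.add_argument("-o", dest="output_dir", required=True)
    args = parser.parse_args()

    for dataset in ("dataset-narrow", "dataset-wide"):
        in_dir = Path(args.input_dir) / dataset
        out_dir = Path(args.output_dir) / dataset
        out_dir.mkdir(parents=True, exist_ok=True)  # the given output directory is empty
        for problem_file in sorted(in_dir.glob("problem-*.txt")):
            problem_id = problem_file.stem.split("-", 1)[1]
            text = problem_file.read_text(encoding="utf-8")
            paragraphs = [p for p in text.split("\n\n") if p.strip()]
            solution = predict(paragraphs)
            out_file = out_dir / f"solution-problem-{problem_id}.json"
            out_file.write_text(json.dumps(solution), encoding="utf-8")

if __name__ == "__main__":
    main()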

In general, this task follows PAN's software submission strategy described here.

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Related Work

  • PAN@CLEF'19 (Overview of the Style Change Detection Task at PAN-2019)
  • PAN@CLEF'18 (Overview of the Author Identification Task at PAN-2018: Cross-domain Authorship Attribution and Style Change Detection)
  • PAN@CLEF'17 (Overview of the Author Identification Task at PAN-2017 and Style Breach Detection section)
  • PAN@CLEF'16 (Clustering by Authorship Within and Across Documents and Author Diarization section)
  • J. Cardoso and R. Sousa. Measuring the performance of ordinal classification. International Journal of Pattern Recognition and Artificial Intelligence 25.08, pp. 1173-1195, 2011
  • Benno Stein, Nedim Lipka and Peter Prettenhofer. Intrinsic Plagiarism Analysis. In Language Resources and Evaluation, Volume 45, Issue 1, pages 63-82, 2011.
  • Efstathios Stamatatos. A Survey of Modern Authorship Attribution Methods. Journal of the American Society for Information Science and Technology, Volume 60, Issue 3, pages 538-556, March 2009.

Files

Restricted

The record is publicly accessible, but files are restricted to users with access.
