Published September 17, 2012 | Version v1
Dataset · Open Access

PAN12 Author Identification: Attribution



We provide you with a training corpus that comprises several different common attribution and clustering scenarios.

In last year’s competition, the corpus consisted of several thousand relatively small documents, with distractor sets of hundreds of authors. This proved impractical for many participants, especially those relying on machine-aided rather than fully automatic analysis. This year we have instead focused on a smaller set of larger documents, perhaps more typical of the cases usually analyzed by “traditional” close reading.

Last year’s corpus was drawn from the Enron email corpus; this year’s was instead collected from the free fiction collection published by, including both classic fiction that is now out-of-copyright as well as (fiction, represented by the site). This of course introduces the standard issue of analysis-by-Google, but that is a very difficult problem to avoid short of generating content to order.
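For readers unfamiliar with the task format: a closed-set attribution scenario pairs a set of training documents of known authorship with unknown documents to be assigned to one of the candidates. The sketch below is purely illustrative (it is not the baseline or evaluation method used in PAN12) and uses a simple character n-gram profile with cosine similarity; all names and the toy data are invented for the example.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram frequency profile of a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two frequency profiles."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attribute(train, unknown, n=3):
    """Assign the unknown text to the candidate with the closest profile."""
    profiles = {author: char_ngrams(" ".join(docs), n)
                for author, docs in train.items()}
    target = char_ngrams(unknown, n)
    return max(profiles, key=lambda a: cosine(profiles[a], target))

# Toy closed-set scenario with two hypothetical candidate authors.
train = {
    "A": ["the quick brown fox jumps over the lazy dog"],
    "B": ["colorless green ideas sleep furiously tonight"],
}
print(attribute(train, "a quick brown fox jumped over a lazy dog"))
```

Real entries in the corpus are, of course, full-length fiction documents rather than single sentences, and competitive systems use far richer feature sets, but the closed-set structure is the same.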


Files (30.4 MB)


Additional details


  • Patrick Juola. An Overview of the Traditional Authorship Attribution Subtask. In Pamela Forner, Jussi Karlgren, and Christa Womser-Hacker, editors, CLEF 2012 Evaluation Labs and Workshop – Working Notes Papers, Rome, Italy, 17–20 September 2012. ISBN 978-88-904810-3-1. ISSN 2038-4963.