Dataset Restricted Access
We provide a training corpus that covers several common attribution and clustering scenarios.
In last year’s competition, the corpus consisted of several thousand relatively small documents, with distractor sets of hundreds of authors. This proved impractical for many participants, especially those who relied on machine-aided rather than fully automatic analysis. We have instead focused on a smaller group of larger documents, perhaps more typical of the kind of cases usually analyzed by “traditional” close reading.
Last year’s corpus was taken from the Enron email corpus; this year’s was instead collected from the free fiction collection published by Feedbooks.com, including both classic fiction that is now out of copyright and contemporary fiction hosted on the site. This of course introduces the standard issue of analysis-by-Google, but that is a very difficult problem to avoid short of generating content to order.
You may request access to the files in this upload, provided that you fulfil the conditions below. The decision to grant or deny access rests solely with the record owner.
Please request access to the data with a short statement on how you want to use it. Thanks!
We would also like to point out that you can register on pan.webis.de to become part of the PAN community.
Patrick Juola. An Overview of the Traditional Authorship Attribution Subtask. In Pamela Forner, Jussi Karlgren, and Christa Womser-Hacker, editors, CLEF 2012 Evaluation Labs and Workshop – Working Notes Papers, 17–20 September 2012, Rome, Italy. CEUR-WS.org. ISBN 978-88-904810-3-1, ISSN 2038-4963.