Published June 14, 2020 | Version v1
Conference paper

Replication and Computational Literary Studies

  • 1. Trier University, Germany
  • 2. Huygens ING and University of Amsterdam, The Netherlands
  • 3. Cornell University, USA
  • 4. University of Würzburg, Germany

Description

The "replication crisis" that has been raging for years in fields like Psychology (Open Science Collaboration 2015) and Medicine (Ioannidis 2005) has recently reached the field of Artificial Intelligence (Barber 2019). One of the key conferences in the field, NeurIPS, has reacted by appointing 'reproducibility chairs' to its organizing committee.[1] In the Digital Humanities, and particularly in Computational Literary Studies (CLS), there is an increasing awareness of the crucial role that replication plays in evidence-based research. Relevant disciplinary developments include the growing importance of evaluation in text analysis and the growing interest in making research transparent through publicly accessible data and code (open source, open data). Specific impulses include Geoffrey Rockwell and Stéfan Sinclair's re-enactments of pre-digital studies (Sinclair and Rockwell 2015) and the recent replication study by Nan Z. Da (Da 2019). Da's paper has been met with an avalanche of responses pushing back against several of its key claims, including its rather sweeping condemnation of the replicated papers. However, an important point got buried in the process: that replication is indeed a valuable goal and practice.[2] As stated in the Open Science Collaboration paper: "Replication can increase certainty when findings are reproduced and promote innovation when they are not" (Open Science Collaboration 2015, 943).

As a consequence, the panel aims to raise a number of issues regarding the place, types, challenges, and affordances of replication in CLS, on both a practical level and a policy or community level. Several impulse papers will address key aspects of the issue: recent experience with attempts at replicating specific papers; policies dealing with replication in fields with more experience in the matter; conceptual and terminological clarification with regard to replication studies; and proposals for a way forward with replication as a community task or a policy issue.

Bibliography

Barber, Gregory. 2019. “Artificial Intelligence Confronts a ‘Reproducibility’ Crisis.” Wired, 2019. https://www.wired.com/story/artificial-intelligence-confronts-reproducibility-crisis/.

Da, Nan Z. 2019. “The Computational Case Against Computational Literary Studies.” Critical Inquiry 45 (3): 601–39. https://www.journals.uchicago.edu/doi/abs/10.1086/702594.

Ioannidis, John P. A. 2005. “Contradicted and Initially Stronger Effects in Highly Cited Clinical Research.” JAMA 294 (2): 218–28. https://doi.org/10.1001/jama.294.2.218.

Open Science Collaboration. 2015. “Estimating the Reproducibility of Psychological Science.” Science 349 (6251): aac4716. https://science.sciencemag.org/content/349/6251/aac4716.

Sinclair, Stéfan, and Geoffrey Rockwell. 2015. “Epistemologica.” 2015. https://github.com/sgsinclair/epistemologica.

Notes

1. See: https://nips.cc/Conferences/2019/Committees.

2. For a selection of responses, see relevant contributions to Cultural Analytics.
