
Published November 7, 2020 | Version 0.3.11
Software · Open Access

oolong: An R package for validating automated content analysis tools

  1. Mannheimer Zentrum für Europäische Sozialforschung, Universität Mannheim

Description

Intended to create standard human-in-the-loop validity tests for typical automated content analysis tools such as topic models and dictionary-based methods. This package offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using the word intrusion and topic intrusion tests described in Chang et al. (2009) <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>. It also provides functions for generating gold-standard data, which is useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020) <doi:10.1080/10584609.2020.1723752>.
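The workflow described above can be sketched in R. This is a minimal sketch, not a verbatim transcript of the package documentation: `abstracts_stm` and `abstracts$text` are hypothetical stand-ins for a fitted topic model (e.g. from the stm package) and the corpus it was trained on, and the test-administration steps launch interactive coding interfaces for a human coder.

```r
library(oolong)

## Word intrusion and topic intrusion tests for a fitted topic model.
## `abstracts_stm` (a fitted stm model) and `abstracts$text` (its corpus)
## are hypothetical stand-ins for your own data.
oolong_test <- create_oolong(input_model = abstracts_stm,
                             input_corpus = abstracts$text)

oolong_test$do_word_intrusion_test()   # coder picks the intruder word
oolong_test$do_topic_intrusion_test()  # coder picks the intruder topic
oolong_test$lock()                     # freeze the coder's answers
oolong_test                            # print precision of the locked test

## Gold-standard generation for validating dictionary-based methods:
## a coder rates a sample of documents on a construct, and the locked
## ratings can be compared against dictionary scores.
gold_test <- create_oolong(input_corpus = abstracts$text)
gold_test$do_gold_standard_test()
gold_test$lock()
```

With multiple coders, one oolong object per coder can be generated (e.g. via `clone_oolong()`) and the locked tests combined with `summarize_oolong()` to assess inter-coder reliability alongside model precision.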

Files

chainsawriot/oolong-0.3.11.zip (13.7 MB)
md5:201ef9167645aee691b8aac46b85cef2

Additional details

Related works