Conference paper Open Access
Norman, Christopher; Leeflang, Mariska; Zweigenbaum, Pierre; Névéol, Aurélie
Current approaches to document discovery for systematic reviews in biomedicine rely on exhaustive manual screening. We evaluate the performance of classifier-based article discovery using different definitions of the inclusion criteria. We test a logistic regression classifier on two datasets created from existing systematic reviews on clinical NLP and drug efficacy, using different criteria to generate positive and negative examples. The classification and ranking achieve an average AUC of 0.769 when relying on gold-standard decisions based on the titles and abstracts of articles, and an AUC of 0.835 when relying on decisions based on full text. Results suggest that inclusion decisions based on titles and abstracts generalize to decisions based on full text, that references excluded in earlier screening stages are important for classification, and that common off-the-shelf algorithms can partially automate the process.
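The pipeline the abstract describes (featurize titles/abstracts, train a logistic regression classifier, rank candidate articles, evaluate with AUC) can be sketched as follows. This is an illustrative sketch only, not the authors' code: the corpus, labels, and TF-IDF features below are assumptions, standing in for real systematic-review screening data.

```python
# Sketch of classifier-based article screening: rank candidate articles by
# predicted inclusion probability and evaluate with AUC. Data are synthetic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy corpus standing in for article titles/abstracts; label 1 = included.
abstracts = [
    "randomized trial of drug efficacy in patients",
    "natural language processing of clinical notes",
    "case report of a rare dermatological condition",
    "double blind placebo controlled drug study",
    "deep learning for clinical text classification",
    "editorial on hospital management policy",
    "drug efficacy meta analysis of controlled trials",
    "annotation of clinical corpora for nlp",
    "opinion piece on healthcare funding",
    "trial protocol for new drug efficacy study",
] * 5  # replicated so the train/test split has enough examples
labels = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1] * 5

X_train, X_test, y_train, y_test = train_test_split(
    abstracts, labels, test_size=0.3, random_state=0, stratify=labels)

# TF-IDF bag-of-words features feeding a logistic regression classifier.
vectorizer = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

# Rank unseen articles by predicted inclusion probability; report AUC.
scores = clf.predict_proba(vectorizer.transform(X_test))[:, 1]
print(f"AUC: {roc_auc_score(y_test, scores):.3f}")
```

In a real screening setting, the ranked scores would be used to prioritize articles for human review rather than to replace it, consistent with the partial automation the abstract reports.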