Published June 30, 2017 | Version v1
Conference paper | Open Access

Tri Automatique de la Littérature pour les Revues Systématiques (Automatic Screening of the Literature for Systematic Reviews)

  • 1. LIMSI, CNRS, Université Paris Saclay, 91405 Orsay, France
  • 2. Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands


Current approaches to document discovery for systematic reviews in biomedicine rely on exhaustive manual screening. We evaluate the performance of classifier-based article discovery under different definitions of the inclusion criteria. We test a logistic regression classifier on two datasets created from existing systematic reviews on clinical NLP and drug efficacy, using different criteria to generate positive and negative examples. The classification and ranking achieves an average AUC of 0.769 when relying on gold-standard decisions based on the titles and abstracts of articles, and an AUC of 0.835 when relying on decisions based on full text. The results suggest that inclusion decisions based on title and abstract generalize to inclusion decisions based on full text, that references excluded at earlier screening stages are informative for classification, and that common off-the-shelf algorithms can partially automate the process.
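A minimal sketch of the kind of pipeline the abstract describes: rank candidate references with a logistic regression trained on bag-of-words features from titles/abstracts, and score the ranking with AUC. The data, vocabulary, and pure-Python learner below are illustrative stand-ins (the paper would use an off-the-shelf implementation and real review datasets), not the authors' actual setup.

```python
import math
from collections import Counter

def featurize(text, vocab):
    # Bag-of-words term counts over a fixed vocabulary.
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def train_logreg(X, y, lr=0.5, epochs=200):
    # Plain stochastic gradient descent on the logistic loss;
    # a stand-in for an off-the-shelf logistic regression.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(x, w, b):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    # AUC = probability that a random positive outranks a random negative.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy corpus: included (1) vs. excluded (0) abstracts -- illustrative only.
docs = ["randomized trial drug efficacy", "drug efficacy cohort study",
        "case report unrelated topic", "editorial opinion no data"]
labels = [1, 1, 0, 0]
vocab = sorted({w for d in docs for w in d.lower().split()})
X = [featurize(d, vocab) for d in docs]
w, b = train_logreg(X, labels)
scores = [predict(x, w, b) for x in X]
print(auc(scores, labels))  # training-set AUC on this toy data
```

In practice the screening sets are heavily imbalanced (few included references among thousands screened), which is why a ranking metric such as AUC is reported rather than accuracy.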




Additional details


Funding: European Commission — MIROR, Methods in Research on Research (grant 676207)