Poster Open Access
Bittremieux, Wout; Meysman, Pieter; Martens, Lennart; Valkenborg, Dirk; Laukens, Kris
Automatic quality assessment of mass spectrometry experiments by multivariate quality control metrics and the relation between low-quality experiments and identifications
Despite the many recent technological and computational advances, performing a mass spectrometry experiment remains a complex activity and its results are subject to considerable variability. To understand and evaluate how technical variability affects an experiment, several computationally derived quality control (QC) metrics have been introduced. However, despite the availability of QC metrics covering a wide range of qualitative information, a systematic approach to quality control is often still lacking.
We will illustrate how QC metrics can be used to automatically discriminate between low-quality and high-quality experiments. Special emphasis will be placed on the interpretability of the results: to systematically integrate QC practices into existing workflows, interpretability of the qualitative information is paramount.
We use unsupervised outlier detection based on identification-free QC metrics to detect deviating, low-performing experiments. Because outliers are identified based on their local neighborhood, our technique can be applied directly to data acquired on different instrument types.
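To illustrate the local-neighborhood idea, consider the local outlier factor (LOF), which scores each run by comparing its local density to that of its nearest neighbors; a score well above 1 marks a deviating run. The abstract does not prescribe this exact scorer, so the following self-contained Python sketch is an assumption for illustration only:

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbours of point i (Euclidean distance)."""
    dists = sorted(
        (math.dist(points[i], points[j]), j)
        for j in range(len(points)) if j != i
    )
    return [j for _, j in dists[:k]]

def k_distance(points, i, k):
    """Distance from point i to its k-th nearest neighbour."""
    return max(math.dist(points[i], points[j]) for j in knn(points, i, k))

def reach_dist(points, i, j, k):
    """Reachability distance of i from j (smooths out density fluctuations)."""
    return max(k_distance(points, j, k), math.dist(points[i], points[j]))

def lrd(points, i, k):
    """Local reachability density: inverse mean reachability distance."""
    neigh = knn(points, i, k)
    return len(neigh) / sum(reach_dist(points, i, j, k) for j in neigh)

def lof(points, i, k):
    """Local outlier factor: ratio of the neighbours' densities to point i's."""
    neigh = knn(points, i, k)
    return sum(lrd(points, j, k) for j in neigh) / (len(neigh) * lrd(points, i, k))
```

In practice, each point would be the vector of (suitably scaled) identification-free QC metrics computed for one run; because the score is relative to the local neighborhood, runs from instruments with different absolute metric ranges can be scored within the same dataset.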
However, it is insufficient merely to know that an experiment is an outlier; it is also vital to know why. To explain why an experiment is an outlier, we identify the subspace of QC metrics in which it can be differentiated from the other experiments, which aids the interpretation of the low-quality experiments. Furthermore, the subspaces for the low-quality experiments can be used to discover which QC metrics influence the identification performance.
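One simple way to realize such an explanatory subspace is to flag the QC metrics on which an outlying run deviates by more than a chosen z-score cutoff from the inlying runs. This is a minimal sketch, not necessarily the subspace method used in this work; the metric names and cutoff are hypothetical:

```python
import statistics

def explain_outlier(outlier, inliers, metric_names, z_cutoff=2.0):
    """Explanatory subspace: the QC metrics on which the outlying run
    deviates strongly (|z-score| > cutoff) from the inlying runs."""
    subspace = []
    for d, name in enumerate(metric_names):
        values = [run[d] for run in inliers]
        mu = statistics.mean(values)
        sigma = statistics.stdev(values)
        z = (outlier[d] - mu) / sigma if sigma > 0 else 0.0
        if abs(z) > z_cutoff:
            subspace.append(name)
    return subspace
```

The returned metric names directly tell a domain expert which aspects of the run (e.g. a hypothetical retention-time drift or TIC accumulation metric) set it apart from the well-behaved runs.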
Preliminary Data or Plenary Speakers Abstract
We used a public dataset of standard QC LC-MS runs performed on different instruments at the Pacific Northwest National Laboratory. For these data, the quality of each run has been manually annotated by expert instrument operators as ‘good’ or ‘poor’. Using these quality labels as ground truth, we validated our outlier detection strategy, showing that it successfully discriminates high-quality experiments from low-quality, outlying experiments. Furthermore, because our approach is fully unsupervised, no training phase is required. This means that, although manually obtained quality labels were used to validate our outlier detection method, such annotations are not required by the algorithm; instead, it can be applied directly to experiments of unknown quality with differing characteristics.
Furthermore, interpreting the detected low-quality experiments through the subspace of QC metrics in which they can be differentiated from the high-quality experiments yields actionable intelligence that domain experts can use to improve the experimental set-up. Next, by combining the explanatory subspaces for all individual outliers, it is possible to get a general view of which QC metrics are most relevant when detecting deviating experiments. We used frequent itemset mining to identify the QC metrics that frequently co-occur in the outliers’ subspaces, and related these frequent outlier subspaces to the identification performance. We observed that not all deviating QC metrics necessarily result in diminished identification performance. Instead, metrics detailing the chromatographic performance, the TIC accumulation, and the precursor ionization most often indicate a significantly lower number of spectrum identifications. Indeed, the efficacy of these QC metrics has previously and independently been noted as well.
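The frequent itemset mining step can be sketched with a brute-force Apriori-style count over the per-outlier explanatory subspaces: every combination of QC metrics that co-occurs in at least a minimum number of subspaces is reported. This is an illustrative sketch, and the metric names below are hypothetical placeholders:

```python
from collections import Counter
from itertools import combinations

def frequent_subspaces(subspaces, min_support):
    """All metric combinations occurring in at least min_support of the
    per-outlier explanatory subspaces (brute-force frequent itemset count)."""
    counts = Counter()
    for sub in subspaces:
        items = sorted(set(sub))  # canonical order so identical sets match
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                counts[combo] += 1
    return {combo: n for combo, n in counts.items() if n >= min_support}
```

For realistic numbers of QC metrics a dedicated Apriori or FP-growth implementation would prune the candidate itemsets instead of enumerating all combinations, but the counting logic is the same.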
To conclude, we present a powerful technique to automatically discriminate low-quality from high-quality experiments based on computationally derived QC metrics, applicable across different instrument types and samples of varying complexity.
We automatically discriminate low-quality from high-quality experiments, provide information to explain the diminished performance, and link to the identification performance.