Conference paper Open Access

Evaluation and Comparison of CNN Visual Explanations for Histopathology

Mara Graziani; Thomas Lompech; Henning Müller; Vincent Andrearczyk

Visualization methods for Convolutional Neural Networks (CNNs) are spreading within the medical community to obtain explainable AI (XAI). The sole qualitative assessment of the explanations is subject to a risk of confirmation bias. This paper proposes a methodology for the quantitative evaluation of common visualization approaches for histopathology images, i.e. Class Activation Mapping and Local-Interpretable Model-Agnostic Explanations. In our evaluation, we propose to assess four main points, namely the alignment with clinical factors, the agreement between XAI methods, and the consistency and repeatability of the explanations. To do so, we compare the intersection over union of multiple visualizations of the CNN attention with the semantic annotation of functionally different nuclei types. The experimental results do not show stronger attributions to the multiple nuclei types than those of a randomly initialized CNN. The visualizations hardly agree on salient areas, and LIME outputs have particularly unstable repeatability and consistency. The qualitative evaluation alone is thus not sufficient to establish the appropriateness and reliability of the visualization tools. The code is available on GitHub at bit.ly/2K48HKz.
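The evaluation described above rests on the intersection over union (IoU) between a CNN attention map and a nuclei annotation mask. A minimal sketch of that comparison, assuming a saliency map in [0, 1] and a binary annotation of the same shape (the function name, threshold, and toy arrays are illustrative, not the authors' code):

```python
import numpy as np

def iou(saliency, annotation, threshold=0.5):
    """Intersection over union between a thresholded saliency map
    (e.g. from CAM or LIME) and a binary annotation mask."""
    pred = saliency >= threshold          # binarize the CNN attention
    target = annotation.astype(bool)      # semantic nuclei annotation
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 0.0

# toy example: two partially overlapping 4x4 masks
sal = np.zeros((4, 4)); sal[:2, :2] = 0.9   # attention on top-left 2x2 block
ann = np.zeros((4, 4)); ann[:2, 1:3] = 1    # annotation shifted one column right
print(iou(sal, ann))  # overlap 2 px, union 6 px -> 0.333...
```

The same score can be computed per nuclei type and per XAI method, which supports the paper's comparisons of alignment with clinical factors and agreement between methods.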

Files: XAI_workshop_mara.pdf (5.7 MB, md5:1438994d98ba7d9fc4897b37a39091d1)
