Published August 1, 2023 | Version v1
Conference paper | Open Access

Saliency Maps as an Explainable AI Method in Medical Imaging: A Case Study on Brain Tumor Classification

  • 1. General Directorate of Health Information Systems, Ministry of Health, Türkiye
  • 2. Department of Biomechanics, Dokuz Eylül University, Türkiye
  • 3. School of Medicine, Department of Neurosurgery, Ankara Yıldırım Beyazıt University, Türkiye
  • 4. ADAPT Research Centre, School of Computer Science, University of Galway, Ireland

Description

Explainable Artificial Intelligence (XAI) plays a crucial role in medical imaging, where AI systems support clinical decision-making and diagnosis. XAI develops approaches that make machine learning (ML) models more transparent and interpretable, facilitating human-AI collaboration and improving trust. In medical imaging, early prediction of anomalies is vital, and understanding an AI system's decision-making process is therefore crucial. Saliency maps highlight the regions of an image that most influence a model's output and have been found to be a user-friendly explanation method for deep learning-based imaging tasks; they are widely used across many domains. Methods for generating saliency maps differ in how the explanation is derived and when it is produced relative to model training: ad-hoc methods are model-specific, while ante-hoc and post-hoc methods are independent of the model architecture. Post-hoc methods, including activation-based, perturbation-based, and gradient-based approaches, are the most commonly used for generating saliency maps. In this case study, we apply gradient-based saliency maps to Magnetic Resonance Imaging (MRI) images to provide insight into brain tumor classification. To this end, we trained a convolutional neural network (CNN) on a benchmark brain MRI dataset and generated saliency maps for its predictions. The results reveal that the tumor and its surrounding pixels play a significant role in the classification of brain MRIs, highlighting the importance of tumor shape in the classification process. Understanding these underlying mechanisms enhances the robustness, reliability, and accountability of AI systems used in brain tumor detection and classification.
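The gradient-based idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's CNN pipeline: it uses a hypothetical linear class score s_c(x) = ⟨W_c, x⟩ on a tiny synthetic image, for which the input gradient (and hence the saliency map |∂s_c/∂x|) can be written down directly. For a real CNN the gradient would instead be obtained by backpropagation through the trained model.

```python
import numpy as np

# Gradient-based saliency sketch on a toy linear model (hypothetical example,
# not the model from the paper). For a class score s_c(x), the saliency map
# is |d s_c / d x|: one importance value per input pixel.

rng = np.random.default_rng(0)

H, W = 8, 8                          # tiny stand-in for an MRI slice
x = rng.random((H, W))               # input "image"
W_c = rng.standard_normal((H, W))    # weights of a linear class-score head

def class_score(img, weights):
    """Toy class score s_c(x) = <W_c, x>, a stand-in for a CNN logit."""
    return float((weights * img).sum())

def saliency_map(img, weights):
    """For the linear score the input gradient is just the weight map;
    the saliency map is its elementwise absolute value."""
    grad = weights                   # d s_c / d x for s_c = <W_c, x>
    return np.abs(grad)

sal = saliency_map(x, W_c)
print(sal.shape)  # (8, 8): one non-negative importance value per pixel
```

In the deep-learning setting the only change is how the gradient is computed: the image is marked as requiring gradients, the chosen class logit is backpropagated, and the absolute (or channel-maximum) input gradient is visualized as a heat map over the scan.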

Files

IMVIP2023_AyseKeles_paper.pdf (565.8 kB, md5:63389d6e93cbfb4084e65b2fb2582bc5)