Published August 1, 2024 | Version v1
Journal article | Open Access

Explaining transfer learning models for the detection of COVID-19 on X-ray lung images

  • 1. American International University
  • 2. Jordan University of Science and Technology

Description

Amidst the coronavirus disease 2019 (COVID-19) pandemic, researchers are exploring innovative approaches to enhance diagnostic accuracy. One avenue is utilizing deep learning models to analyze lung X-ray images for COVID-19 diagnosis, complementing existing tests such as reverse transcription polymerase chain reaction (RT-PCR). However, trusting these models, often viewed as black boxes, presents a challenge. To address this, six explainable artificial intelligence (XAI) techniques are applied to interpret four transfer learning models: local interpretable model-agnostic explanations (LIME), Shapley additive explanations (SHAP), integrated gradients, SmoothGrad, gradient-weighted class activation mapping (Grad-CAM), and Layer-CAM. The four models (VGG16, ResNet50, InceptionV3, and DenseNet121) are analyzed to understand their workings and the rationale behind their predictions. Validating the results with medical experts poses difficulties due to time and resource constraints, alongside the scarcity of annotated X-ray datasets. To address this, a voting mechanism that aggregates the outputs of different XAI methods across the various models is proposed. This approach highlights regions of lung infection while potentially reducing the biases individual models inherit from their architectures. If successful, this research could pave the way for an automated system for annotating infection regions, bolstering confidence in predictions and aiding the development of more effective diagnostic tools for COVID-19.
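The abstract describes a voting mechanism that aggregates saliency maps from several XAI methods and models, but does not specify the aggregation rule. A minimal sketch of one plausible scheme, assuming each method yields a per-pixel saliency map that is normalized, binarized, and combined by pixel-wise majority vote (the function name, threshold, and majority rule here are illustrative assumptions, not the authors' published procedure):

```python
import numpy as np

def vote_on_saliency_maps(saliency_maps, threshold=0.5, min_votes=None):
    """Combine saliency maps from several XAI methods/models by pixel-wise voting.

    Each map is min-max normalized to [0, 1] and binarized at `threshold`;
    a pixel enters the consensus mask if at least `min_votes` maps
    (default: a simple majority) flag it as salient.
    """
    maps = np.stack([np.asarray(m, dtype=float) for m in saliency_maps])
    # Min-max normalize each map independently (guard against constant maps)
    mins = maps.min(axis=(1, 2), keepdims=True)
    maxs = maps.max(axis=(1, 2), keepdims=True)
    span = np.where(maxs - mins == 0, 1.0, maxs - mins)
    norm = (maps - mins) / span
    # Count, per pixel, how many maps mark it as salient
    votes = (norm >= threshold).sum(axis=0)
    if min_votes is None:
        min_votes = len(saliency_maps) // 2 + 1  # simple majority
    return votes >= min_votes

# Toy usage: three 2x2 "saliency maps" agreeing on the top-left pixel
m1 = [[0.9, 0.1], [0.1, 0.1]]
m2 = [[0.8, 0.7], [0.0, 0.1]]
m3 = [[1.0, 0.2], [0.3, 0.0]]
mask = vote_on_saliency_maps([m1, m2, m3])  # True only where >= 2 of 3 agree
```

Majority voting is only one choice; averaging the normalized maps or weighting methods by model confidence would be equally valid aggregation rules under the same idea of cross-method consensus.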

Files

89 35407 IJECE 9% DDV.pdf (551.8 kB)
md5:0e1b2a790724700d22e0a733fe8e9d7b