Published May 8, 2023 | Version v3
Dataset · Open Access

Explainable AI for Retinoblastoma Diagnosis: Interpreting Deep Learning Models with LIME and SHAP

  • 1. Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
  • 2. School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
  • 3. Department of Information Systems, College of Computer and Information Sciences, Jouf University
  • 4. Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia

Description

Retinoblastoma is a rare and aggressive form of childhood eye cancer that requires prompt diagnosis and treatment to prevent vision loss and death. Deep learning models have shown promising results in detecting retinoblastoma from fundus images, but their decision-making process is often considered a "black box" that lacks transparency and interpretability. In this project, we explore the use of LIME and SHAP, two popular explainable AI techniques, to generate local and global explanations for a deep learning model based on the InceptionV3 architecture and trained on retinoblastoma and non-retinoblastoma fundus images. We collected and labeled a dataset of 400 retinoblastoma and 400 non-retinoblastoma images, split it into training, validation, and test sets, and trained the model using transfer learning from the pre-trained InceptionV3 model. We then applied LIME and SHAP to generate explanations for the model's predictions on the validation and test sets. Our results demonstrate that LIME and SHAP can effectively identify the regions and features in the input images that contribute the most to the model's predictions, providing valuable insight into the decision-making process of the deep learning model. In addition, the InceptionV3 architecture with a spatial attention mechanism achieved a high accuracy of 97% on the test set, indicating the potential of combining deep learning and explainable AI to improve retinoblastoma diagnosis and treatment.
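To make the described workflow concrete, the sketch below shows how such a pipeline could be assembled in Python with TensorFlow/Keras and the lime and shap packages. It is a minimal illustration, not the released code: the classification head, the placeholder arrays, and helper names such as build_model and predict_fn are assumptions, and the spatial attention module mentioned above is omitted for brevity.

# Sketch of the described pipeline: InceptionV3 transfer learning plus LIME/SHAP
# explanations. Hypothetical names and placeholders; not the authors' released code.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from lime import lime_image
import shap

IMG_SHAPE = (299, 299, 3)  # InceptionV3's expected input size

def build_model():
    """Transfer learning: frozen ImageNet InceptionV3 base with a small binary head."""
    base = InceptionV3(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
    base.trainable = False
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # P(retinoblastoma)
    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_model()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # train on the 400+400 image dataset

def predict_fn(images):
    """LIME expects per-class probabilities, so expand the sigmoid output to two columns."""
    x = preprocess_input(np.array(images, dtype=np.float32))
    p = model.predict(x, verbose=0)
    return np.hstack([1.0 - p, p])  # columns: [non-retinoblastoma, retinoblastoma]

# --- Local explanation with LIME for a single fundus image (HxWx3 uint8 array) ---
test_image = np.random.randint(0, 256, IMG_SHAPE, dtype=np.uint8)  # placeholder; use a real test image
lime_explainer = lime_image.LimeImageExplainer()
explanation = lime_explainer.explain_instance(
    test_image, predict_fn, top_labels=2, hide_color=0, num_samples=1000)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
# `mask` marks the superpixels that most support the predicted class.

# --- Pixel-level attributions with SHAP's GradientExplainer ---
background = preprocess_input(np.zeros((10,) + IMG_SHAPE, dtype=np.float32))  # e.g. a sample of training images
shap_explainer = shap.GradientExplainer(model, background)
shap_values = shap_explainer.shap_values(
    preprocess_input(test_image[np.newaxis].astype(np.float32)))
# shap.image_plot(shap_values, test_image[np.newaxis])  # visualize the attributions

In this layout, LIME perturbs superpixels of one image to produce a local explanation, while SHAP's gradient-based attributions can be aggregated over many validation or test images to give a more global view of what the model relies on.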

Files

Files (8.3 MB)

  • 8.2 MB (md5:339d1fae69203a091e1efc61d4feefcc)
  • 16.9 kB (md5:c2812e4d2fda042c1e2db89c5056aa77)
