Explainable Deep Learning for Lung Disease Detection on Chest X-ray Images Using Local Interpretable Model-Agnostic Explanations (LIME)
Authors/Creators
- 1. Informatics Engineering Study Program, Faculty of Science and Technology, Universitas Islam Negeri Sultan Syarif Kasim, Pekanbaru-Riau, Indonesia.
Description
Artificial Intelligence (AI) is increasingly applied in healthcare through Machine Learning (ML) and Deep Learning (DL) models. However, the complexity of modern black-box models creates a need for transparent interpretation methods. Explainable AI (XAI) has emerged to bridge this gap by providing a better understanding of model behavior. This study implements the Local Interpretable Model-agnostic Explanations (LIME) method to visualize the classification results of a DL model based on the ResNet18 architecture on Chest X-ray (CXR) images across three classes: normal, COVID-19, and pneumonia. The model achieved a precision of 97%, recall of 97%, and F1-score of 97%, with an accuracy of 98%. LIME visualizations highlight the image regions that contribute most to each classification and effectively distinguish among the three classes. The results demonstrate that applying XAI, specifically LIME, to a ResNet18-based DL model can provide interpretability in CXR image classification tasks.
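The core idea of LIME described above — perturb an input by toggling superpixel segments, query the black-box model on the perturbations, and fit a locally weighted linear model whose coefficients rank segment importance — can be sketched as follows. This is a minimal illustration, not the paper's implementation: in practice the authors' pipeline would use the `lime` package (e.g. `lime_image.LimeImageExplainer`) with the trained ResNet18, and `toy_model` here is a stand-in for that classifier.

```python
import numpy as np

def explain_instance(predict_fn, n_segments, n_samples=500, seed=0):
    """Estimate per-segment importance via a local weighted linear model,
    in the spirit of LIME (hypothetical simplified variant)."""
    rng = np.random.default_rng(seed)
    # Perturbations: randomly keep (1) or gray out (0) each segment.
    Z = rng.integers(0, 2, size=(n_samples, n_segments)).astype(float)
    Z[0] = 1.0  # include the unperturbed instance itself
    y = predict_fn(Z)  # black-box class probability per perturbation
    # Proximity kernel: perturbations closer to the original weigh more.
    dist = 1.0 - Z.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)
    # Weighted least squares fit of probability vs. segment on/off pattern.
    X = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(weights[:, None] * X, weights * y, rcond=None)
    return coef[:-1]  # drop the intercept; one importance per segment

# Toy black-box: probability driven mainly by segment 2 (stands in for the
# CXR classifier, where important segments would be lung-field regions).
def toy_model(Z):
    return 0.8 * Z[:, 2] + 0.1 * Z[:, 0]

importances = explain_instance(toy_model, n_segments=5)
top_segment = int(np.argmax(importances))
print(top_segment)  # segment 2 dominates the local linear model → 2
```

In the real LIME-for-images workflow the segments come from a superpixel algorithm (e.g. quickshift or SLIC), and the highest-weighted segments are overlaid on the CXR image to produce the highlighted-region visualizations the abstract describes.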
Files
| Name | Size |
|---|---|
| 18.pdf (md5:a9c17c431c391f81710febdbf7f7165e) | 462.9 kB |