Published September 30, 2025 | Version v1
Journal article (Open Access)

Explainable Artificial Intelligence (XAI): A Comprehensive Review of Methods, Applications, and Open Issues

  • Dennis Osadebay University

Contributors

Data collector:

  • Dennis Osadebay University

Description

Artificial Intelligence (AI) has achieved remarkable breakthroughs across multiple domains, yet the increasing reliance on complex black-box models has raised concerns about trust, transparency, and accountability. Explainable Artificial Intelligence (XAI) has emerged as a critical paradigm aimed at making AI models more interpretable and understandable without compromising performance. This paper presents a comprehensive review of XAI, beginning with its foundations, historical evolution, and core principles such as interpretability, transparency, fairness, causality, and usability. It examines major methodological approaches, including model-specific versus model-agnostic techniques, intrinsic versus post-hoc explanations, and local versus global perspectives, while analyzing widely used methods such as SHAP, LIME, surrogate models, visualization tools, counterfactuals, and example-based explanations. The paper further highlights applications of XAI in healthcare, finance, autonomous systems, cybersecurity, governance, education, and recommender systems, demonstrating its relevance in real-world decision-making. Evaluation metrics, including fidelity, human-centered usability, robustness, and the trade-off between explainability and performance, are discussed to frame the challenges of measuring explanation quality. Despite these advances, open issues such as lack of standardization, scalability, ethical and legal implications, and adoption barriers persist. Future directions emphasize human-centered and interactive explanations, hybrid symbolic-statistical models, standardized evaluation frameworks, applications in emerging fields, and stronger policy integration. Overall, XAI is positioned as a cornerstone for building trustworthy, sustainable, and ethical AI systems.
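To illustrate the counterfactual explanations surveyed in the abstract, the following minimal sketch greedily searches for feature changes that flip a black-box model's decision. Everything here is hypothetical and not drawn from the paper: `black_box` is a toy loan-approval scorer, and the step size, threshold, and feature values are arbitrary choices for demonstration.

```python
import math

def black_box(x):
    """Hypothetical black-box model: probability of loan approval
    from (income_k, debt_k). A stand-in, not a model from the paper."""
    income, debt = x
    return 1 / (1 + math.exp(-(0.05 * income - 0.1 * debt - 1.0)))

def counterfactual(f, x, threshold=0.5, step=1.0, max_iter=1000):
    """Greedy coordinate search: at each iteration, nudge one feature
    up or down by `step`, keeping whichever single change most
    increases f, until the decision crosses `threshold`."""
    cf = list(x)
    for _ in range(max_iter):
        if f(cf) >= threshold:
            return cf
        best, best_p = None, f(cf)
        for i in range(len(cf)):
            for d in (step, -step):
                cand = list(cf)
                cand[i] += d
                p = f(cand)
                if p > best_p:
                    best, best_p = cand, p
        if best is None:
            break  # no single-feature step improves the score
        cf = best
    return cf

x = [30.0, 20.0]                      # applicant the model rejects
cf = counterfactual(black_box, x)     # minimally changed, approved applicant
# Per-feature deltas form the explanation, e.g. "reduce debt by N":
changes = [(i, cf[i] - x[i]) for i in range(len(x)) if cf[i] != x[i]]
```

The resulting `changes` list reads as an actionable explanation ("what would need to differ for approval"), which is why counterfactuals are often favored in user-facing settings; real systems would add constraints on feature plausibility and mutability.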

Files

Adeoye et al. v2.pdf (966.1 kB)
md5:18d6280dce92e636ac00efe59d4d5986

Additional details