Published December 20, 2025
| Version v1
Conference paper
Open
Explainable Artificial Intelligence for Transparent and Trustworthy AI
Description
Explainable Artificial Intelligence (XAI) has emerged as a crucial area of research, addressing the opaque nature of deep learning models, which is particularly problematic in high-stakes fields that necessitate interpretability and trust, such as healthcare, finance, and autonomous systems. This review delineates the progression of XAI, with an emphasis on recent advancements, as well as the distinctions between model-specific and model-agnostic methodologies, while critically examining the challenges inherent in reconciling accuracy with transparency. Prominent XAI techniques are systematically discussed, encompassing feature attribution, visual explanations, and both local and global interpretability strategies. A comparative analysis of the applicability and limitations of these techniques within deep learning architectures is provided. Moreover, this paper evaluates training strategies and architectural modifications that are intended to enhance interpretability in neural networks without compromising their performance metrics. A thorough overview of contemporary applications illustrates the integral function of XAI in promoting ethical AI practices and ensuring compliance with regulatory standards. Ultimately, this review aspires to inform future research initiatives by highlighting promising avenues for the development of AI systems that are not only interpretable and robust but also socially responsible.
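Since the abstract highlights model-agnostic feature attribution among the surveyed XAI techniques, here is a minimal sketch of one such method, permutation importance: shuffle one feature at a time and measure how much the model's error grows. The toy predictor, data, and all names below are illustrative assumptions, not taken from the paper.

```python
import random

def black_box(x):
    # Hypothetical stand-in for any opaque predictor:
    # feature 0 matters most, feature 2 not at all.
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def mse(predict, X, y):
    # Mean squared error of the predictor on dataset (X, y).
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic attribution: permute one feature column at a
    time and report the mean increase in error. A larger increase
    means the model relies more heavily on that feature."""
    rng = random.Random(seed)
    baseline = mse(predict, X, y)
    importances = []
    for j in range(len(X[0])):
        rise = 0.0
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature's link to the target
            X_perm = [row[:j] + [v] + row[j + 1:]
                      for row, v in zip(X, col)]
            rise += mse(predict, X_perm, y) - baseline
        importances.append(rise / n_repeats)
    return importances

# Tiny synthetic dataset; labels come from the model itself, so the
# baseline error is zero and any rise is due to the permutation.
rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [black_box(x) for x in X]
importances = permutation_importance(black_box, X, y)
```

Because the method only queries the model through its predictions, it applies to any architecture; the trade-off, as the review notes for model-agnostic techniques generally, is that such explanations describe input–output behavior rather than internal mechanisms.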
Files
mypaper1.pdf (1.1 MB)
md5:bc596da3ad91bee440c0ef63c02fb89d