Explainable Artificial Intelligence 101: Techniques, Applications and Challenges [preprint]
Description
Artificial Intelligence (AI) systems have become commonplace in modern life, with applications ranging from customized recommendations to self-driving vehicles. As these systems grow more complex, transparency in their decision-making processes becomes more critical. Explainability refers to an AI system's ability to explain how and why it made a certain judgment or prediction. Recently, there has been a surge of interest in constructing explainable AI (XAI) systems that give insights into the decision-making processes of machine learning models. This paper presents and elaborates upon a selection of XAI techniques, and identifies current challenges and possible future directions in XAI research.
---
Disclaimer:
This is a preprint version of the article.
The content here is for view-only purposes. This is not the final published version and may differ from the version of record.
Please refer to the official version for citation and authoritative use.
Files
ZENODO__Explainable_Artificial_Intelligence_101__Techniques__Applications_and_Challenges__pv_-3.pdf (120.9 kB)
md5:2cd7303f637d4614e83dfd7c410919c4