Towards Quality Measures for xAI algorithms: Explanation Stability [preprint]
Description
Artificial Intelligence has become ubiquitous across a wide range of domains and is now an integral part of the daily life of the ordinary citizen. While increasing the transparency of highly accurate black-box models is an important and very active area of research, the explanations produced for such models may not themselves be accurate, making measures that assess the quality of explanations an important research topic. In this paper, an extensive set of experiments is performed to evaluate the stability of SHAP explanations under different noise types and noise intensities, as a step towards a formal way of assessing the quality of explanations produced by the SHAP algorithm. The experiments are performed on four datasets, with three noise types at four strength levels. The impact of these scenarios on SHAP explanations is reported, and the implications for the evaluation of explainability methods are discussed, along with the significance of the results for the SHAP method. Future directions are laid out thereafter.
---
Disclaimer:
This is a preprint version of the article.
The content here is for view-only purposes. This is not the final published version and may differ from the version of record.
Please refer to the official version for citation and authoritative use.
Files
| Name | Size |
|---|---|
| ZENODO__Towards_Quality_Measures_for_xAI_algorithms__Explanation_Stability__pv_.pdf (md5:b14abef8ec9297a330478f587f88006a) | 312.3 kB |