Published September 12, 2023 | Version v1
Dataset · Open Access

Replication Package - From Research to Practice: A Survey of XAI Process Frameworks

Authors/Creators

  • Anonymous

Description

Replication package for the ICSE NIER submission "From Research to Practice: A Survey of XAI Process Frameworks." It provides details about the methods and data used in our analysis; see the README for more information.

Files (29.5 MB)

  • md5:3acf1aca42db247d236df932ec9fa122 (7.5 kB)
  • md5:a9e84f71f1725a16f3df65f51a3f8f8d (5.5 MB)
  • md5:963e571685022f3b1af806dc52be018c (20.8 MB)
  • md5:cbd1d9973c3caaf9ca10c152636a244e (3.0 MB)
  • md5:27fcc314b8a5c35e4f7c9ebf21850cc5 (73.3 kB)

Additional details

References

  • Garrick Cabour, Andrés Morales, Élise Ledoux, and Samuel Bassetto. 2021. Towards an Explanation Space to Align Humans and Explainable-AI Teamwork. https://doi.org/10.48550/arXiv.2106.01503 arXiv:2106.01503 [cs].
  • Haomin Chen, Catalina Gomez, Chien-Ming Huang, and Mathias Unberath. 2022. Explainable Medical Imaging AI Needs Human-Centered Design: Guidelines and Evidence from a Systematic Review. https://doi.org/10.48550/arXiv.2112.12596 arXiv:2112.12596 [cs, eess].
  • Douglas Cirqueira, Dietmar Nedbal, Markus Helfert, and Marija Bezbradica. 2020. Scenario-Based Requirements Elicitation for User-Centric Explainable AI. In Machine Learning and Knowledge Extraction, Andreas Holzinger, Peter Kieseberg, A Min Tjoa, and Edgar Weippl (Eds.). Springer International Publishing, Cham, 321–341.
  • Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, and Hal Daume III. 2022. Seamful XAI: Operationalizing Seamful Design in Explainable AI. https://doi.org/10.48550/arXiv.2211.06753 arXiv:2211.06753 [cs].
  • Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing Transparency Design into Practice. In 23rd International Conference on Intelligent User Interfaces (IUI'18). Association for Computing Machinery, New York, NY, USA, 211–223. https://doi.org/10.1145/3172944.3172961
  • Umm-e Habiba, Justus Bogner, and Stefan Wagner. 2022. Can Requirements Engineering Support Explainable Artificial Intelligence? Towards a User-Centric Approach for Explainability Requirements. https://doi.org/10.48550/arXiv.2206.01507 arXiv:2206.01507 [cs].
  • Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, and Ghassan Hamarneh. 2022. EUCA: the End-User-Centered Explainable AI Framework. https://doi.org/10.48550/arXiv.2102.02437 arXiv:2102.02437 [cs].
  • Q. Vera Liao, Milena Pribić, Jaesik Han, Sarah Miller, and Daby Sow. 2021. Question-Driven Design Process for Explainable AI User Experiences. https://doi.org/10.48550/arXiv.2104.03483 arXiv:2104.03483 [cs].
  • Sina Mohseni, Niloofar Zarei, and Eric D. Ragan. 2020. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems. https://doi.org/10.48550/arXiv.1811.11839 arXiv:1811.11839 [cs].
  • Gesina Schwalbe and Bettina Finzel. 2023. A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts. Data Mining and Knowledge Discovery (Jan. 2023). https://doi.org/10.1007/s10618-022-00867-8 arXiv:2105.07190 [cs].