The White Box Paradox in Medicine, Its Discontents and Possible Solutions
Creators
1. University of Milano-Bicocca, IRCCS Istituto Ortopedico Galeazzi
2. University of Milan, Vita-Salute University San Raffaele
Description
Since cases of worsened automation bias following the presentation of system explanations have been reported in the specialized literature, the optimal timing of explanation provision during human-AI interaction should be carefully designed and investigated in order to avoid anchoring effects, which hinder an independent evaluation of the case at hand. In our experimental design, we considered whether AI advice and explanations were provided either before the human decision (so-called “ram” protocols) or after it (“hound” protocols). We applied these two interaction protocols in an empirical study on the task of abnormality detection in knee magnetic resonance imaging, augmented by relevance maps. On that occasion, we experimentally observed that visual explanations may paradoxically make users less confident in their final decision. We also described the white box paradox, whereby explaining the decisions of a black-box algorithm may make it a less useful, or even more harmful, support. We therefore propose a series of design solutions to remedy these drawbacks by adopting the anti-agential perspective of adjunction: the machine is marginalized in the loop of human action and decision, with the aim of supporting humans without affecting their autonomy, agency, and accountability towards those impacted by their decisions.
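To make the two timings concrete, the following minimal sketch contrasts them in Python; all names here (get_ai_advice, get_human_decision, ram_protocol, hound_protocol) are hypothetical illustrations of the interaction flow, not code from the study, and the simulated reader simply follows the advice whenever it is shown.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Advice:
    label: str                       # e.g. "abnormal" or "normal"
    relevance_map: Optional[object]  # the visual explanation (e.g. a saliency overlay)

def get_ai_advice(case: str) -> Advice:
    # Hypothetical stand-in for the classifier: a real system would run
    # the knee MRI through a model and produce a relevance map.
    return Advice(label="abnormal", relevance_map=None)

def get_human_decision(case: str, advice: Optional[Advice] = None) -> str:
    # Hypothetical stand-in for the clinician's judgment. Here the reader
    # follows the advice whenever it is shown, i.e. the anchoring and
    # over-reliance behavior the two protocols are designed to probe.
    return advice.label if advice else "normal"

def ram_protocol(case: str) -> str:
    # "Ram" timing: advice and explanation are shown *before* the human
    # decides, so the machine's output can anchor the reader.
    advice = get_ai_advice(case)
    return get_human_decision(case, advice)

def hound_protocol(case: str) -> Tuple[str, str]:
    # "Hound" timing: the human first commits to an independent judgment;
    # advice and explanation are revealed only afterwards, and the human
    # may confirm or revise the initial decision.
    initial = get_human_decision(case)        # unaided first read
    advice = get_ai_advice(case)              # advice revealed second
    final = get_human_decision(case, advice)  # possible revision
    return initial, final

if __name__ == "__main__":
    print("ram:", ram_protocol("knee_mri_001"))
    print("hound:", hound_protocol("knee_mri_001"))
```

Note that only the hound timing preserves a record of the human's unanchored judgment, which is what makes anchoring effects observable and comparable across protocols.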
Files
F_Cabitza_C_Natali_Whitebox_Paradox(2).pdf (110.6 kB, md5:5e25e0505d8025b819764a8a16e88aa8)
Additional details
References
- High-Level Expert Group on AI. Ethics guidelines for trustworthy AI. European Commission and Directorate-General for Communications Networks, Content and Technology; 2019.
- Markus AF, Kors JA, Rijnbeek PR. The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. Journal of Biomedical Informatics. 2021;113:103655.
- OECD. The OECD Artificial Intelligence (AI) Principles. oecd.ai. [Online; accessed 2022-06-21].
- Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health. 2021;3(11):e745-50.
- Capone L, Bertolaso M. A Philosophical Approach for a Human-centered Explainable AI. In: XAI.it@AI*IA; 2020. p. 80-6.
- Tomsett R, Preece A, Braines D, Cerutti F, Chakraborty S, Srivastava M, et al. Rapid trust calibration through interpretable and uncertainty-aware AI. Patterns. 2020;1(4):100049.
- Schemmer M, Kühl N, Benz C, Satzger G. On the Influence of Explainable AI on Automation Bias. arXiv preprint arXiv:2204.08859. 2022.
- Kaur H, Nori H, Jenkins S, Caruana R, Wallach H, Wortman Vaughan J. Interpreting interpretability: understanding data scientists' use of interpretability tools for machine learning. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; 2020. p. 1-14.
- Bertrand A, Belloum R, Eagan JR, Maxwell W. How Cognitive Biases Affect XAI-assisted Decision-making: A Systematic Review. In: Proc. of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society. AIES. New York, NY: ACM; 2022. To appear.
- Elmore JG, Lee CI. Artificial Intelligence in Medical Imaging—Learning From Past Mistakes in Mammography. In: JAMA Health Forum. vol. 3. American Medical Association; 2022. p. e215207-7.
- Bansal G, Wu T, Zhou J, Fok R, Nushi B, Kamar E, et al. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems; 2021. p. 1-16.
- Englich B, Mussweiler T, Strack F. Playing dice with criminal sentences: The influence of irrelevant anchors on experts' judicial decision making. Personality and Social Psychology Bulletin. 2006;32(2):188-200.
- Wang D, Yang Q, Abdul A, Lim BY. Designing theory-driven user-centric explainable AI. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems; 2019. p. 1-15.
- Lipton ZC. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue. 2018;16(3):31-57.
- The Royal Society. Explainable AI: The Basics; 2019. Available from: https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf.
- Bussone A, Stumpf S, O'Sullivan D. The role of explanations on trust and reliance in clinical decision support systems. In: 2015 International Conference on Healthcare Informatics. IEEE; 2015. p. 160-9.
- Stumpf S, Rajaram V, Li L, Wong WK, Burnett M, Dietterich T, et al. Interacting meaningfully with machine learning systems: Three experiments. International Journal of Human-Computer Studies. 2009;67(8):639-62.
- Cabitza F, Campagner A, Simone C. The need to move away from agential AI: Empirical investigations, useful concepts and open issues. International Journal of Human-Computer Studies. 2021;155:102696.
- Cabitza F. Cobra AI: Exploring Some Unintended Consequences. Machines We Trust: Perspectives on Dependable AI. 2021:87.
- Cabitza F, Natali C. Open, Multiple, Adjunct. Decision Support at the Time of Relational AI. In: Proceedings of the First International Conference on Hybrid Human-Machine Intelligence. Frontiers of AI. Amsterdam, Netherlands: IOS Press; 2022.
- Shneiderman B. Human-centered artificial intelligence: three fresh ideas. AIS Transactions on Human-Computer Interaction. 2020;12(3):109-24.
- Srinivasan A, Bain M, Coiera E. One-way Explainability Isn't The Message. arXiv preprint arXiv:2205.08954. 2022.
- Pierce J. Undesigning technology: considering the negation of design by design. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 2012. p. 957-66.
- Gandouz M, Holzmann H, Heider D. Machine learning with asymmetric abstention for biomedical decision-making. BMC Medical Informatics and Decision Making. 2021;21(1):1-11.
- Campagner A, Cabitza F, Ciucci D. Three-way decision for handling uncertainty in machine learning: A narrative review. In: International Joint Conference on Rough Sets. Springer; 2020. p. 137-52.
- Tschandl P, Rinner C, Apalla Z, Argenziano G, Codella N, Halpern A, et al. Human–computer collaboration for skin cancer recognition. Nature Medicine. 2020;26(8):1229-34.
- Buçinca Z, Malaya MB, Gajos KZ. To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction. 2021;5(CSCW1):1-21.
- Cabitza F, Rasoini R, Gensini GF. Unintended consequences of machine learning in medicine. JAMA. 2017;318(6):517-8.