
Published July 16, 2022 | Version v1
Conference paper | Open Access

Open, multiple, adjunct. Decision support at the time of Relational AI

  • 1. University of Milano-Bicocca, IRCCS Istituto Ortopedico Galeazzi
  • 2. University of Milan, Vita-Salute University San Raffaele

Description

In this paper, we consider some key characteristics that relational AI should exhibit to enable hybrid decision agencies, that is, ensembles of subject-matter experts and their AI-enabled decision aids, especially when the latter have been developed with a machine learning approach. We hint at design requirements for guaranteeing that AI tools are open, multiple, continuous, cautious, vague, analogical and, most importantly, adjunct with respect to decision-making practices. We argue that adjunction, in particular, is an important condition to design for. Adjunction entails the design and evaluation of human-AI interaction protocols aimed at improving AI usability, that is, decision effectiveness and efficiency, while also guaranteeing user satisfaction and human and social sustainability, and mitigating the risks of automation bias, technology over-reliance and user deskilling. These high-level aims are compatible with the tenets of a relational approach to the design of AI tools that support decision making and collaborative practices.
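As a concrete illustration of what such an adjunct, cautious interaction protocol might look like in software, the following is a minimal, hypothetical Python sketch; all names (AdjunctAid, advise), the confidence threshold, and the two-step flow are illustrative assumptions, not taken from the paper. The aid withholds its advice until the expert has committed a first judgment, and abstains rather than suggest when the model's confidence is low.

```python
# Hypothetical sketch of an "adjunct" human-AI protocol: the decision aid
# only speaks after the expert has committed a first judgment (a cognitive
# forcing function), and abstains when its confidence is below a threshold
# (cautious behavior). Names and threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Advice:
    label: str
    confidence: float


class AdjunctAid:
    def __init__(self, threshold: float = 0.75):
        self.threshold = threshold  # below this, the aid abstains

    def advise(self, model_label: str, model_confidence: float,
               human_first_judgment: str) -> Optional[Advice]:
        # Adjunct: require the expert's judgment before revealing advice.
        if not human_first_judgment:
            raise ValueError("record the expert's judgment before showing advice")
        # Cautious: abstain rather than force a low-confidence suggestion.
        if model_confidence < self.threshold:
            return None
        return Advice(model_label, model_confidence)


# Usage: the expert decides first; only then is the (possibly absent) advice shown.
aid = AdjunctAid(threshold=0.75)
advice = aid.advise("benign", 0.82, human_first_judgment="malignant")
print("AI advice:", advice if advice else "abstained")
```

The forced first judgment echoes the cognitive forcing functions studied by Buçinca et al., and the abstention echoes the cautious, three-way and conformal approaches cited in the references below.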

Files

F. Cabitza, C. Natali (2022). Open, Multiple, Adjunct_Decision Support at the Time of Relational AI.pdf

Additional details

References

  • De Michelis G. Aperto, molteplice, continuo: gli artefatti alla fine del Novecento. Zanichelli, Milano; 1998.
  • Zenisek J, Holzinger F, Affenzeller M. Machine learning based concept drift detection for predictive maintenance. Computers & Industrial Engineering. 2019;137:106031.
  • Lu D, Tao C, Chen J, Li F, Guo F, Carin L. Reconsidering generative objectives for counterfactual reasoning. Advances in Neural Information Processing Systems. 2020;33:21539-53.
  • Goddard K, Roudsari A, Wyatt JC. Automation bias: a systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association. 2012;19(1):121-7.
  • Gabora L. Reframing convergent and divergent thought for the 21st century. arXiv preprint arXiv:181104512. 2018.
  • Wu J, Qian X, Wang MY. Advances in generative design. Computer-Aided Design. 2019;116:102733.
  • Chacón JC, Nimi HM, Kloss B, Kenta O. Towards the Development of AI Based Generative Design Tools and Applications. In: International Conference on Design, Learning, and Innovation. Springer; 2020. p. 63-73.
  • Tschandl P, Rinner C, Apalla Z, Argenziano G, Codella N, Halpern A, et al. Human–computer collaboration for skin cancer recognition. Nature Medicine. 2020;26(8):1229-34.
  • Rundo L, Pirrone R, Vitabile S, Sala E, Gambino O. Recent advances of HCI in decision-making tasks for optimized clinical workflows and precision medicine. Journal of biomedical informatics. 2020;108:103479.
  • Verma S, Dickerson J, Hines K. Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:201010596. 2020.
  • Campagner A, Cabitza F, Ciucci D. Three-way classification: Ambiguity and abstention in machine learning. In: International Joint Conference on Rough Sets. Springer; 2019. p. 280-94.
  • Vovk V, Gammerman A, Shafer G. Algorithmic learning in a random world. Springer Science & Business Media; 2005.
  • Campagner A, Cabitza F, Berjano P, Ciucci D. Three-way decision and conformal prediction: Isomorphisms, differences and theoretical properties of cautious learning approaches. Information Sciences. 2021;579:347-67.
  • Keane M. Analogical mechanisms. Artificial Intelligence Review. 1988;2(4):229-51.
  • Baselli G, Codari M, Sardanelli F. Opening the black box of machine learning in radiology: can the proximity of annotated cases be a way? European Radiology Experimental. 2020;4(1):1-7.
  • Cornelissen NAJ, van Eerdt RJM, Schraffenberger HK, Haselager WFG. Reflection machines: increasing meaningful human control over Decision Support Systems. Ethics and Information Technology. 2022;19(24).
  • Cabitza F, Campagner A, Sconfienza LM. Studying human-AI collaboration protocols: the case of the Kasparov's law in radiological double reading. Health Information Science and Systems. 2021;9(1):1-20.
  • Kasparov G, Greengard M. Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. Millennium Series. John Murray Press; 2017. Available from: https://books.google.it/books?id=ffYZDQAAQBAJ.
  • Cabitza F, Campagner A, Simone C. The need to move away from agential-AI: Empirical investigations, useful concepts and open issues. International Journal of Human-Computer Studies. 2021;155:102696.
  • Tsamados A, Aggarwal N, Cowls J, Morley J, Roberts H, Taddeo M, et al. The ethics of algorithms: key problems and solutions. AI & SOCIETY. 2021:1-16.
  • Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare. Journal of medical ethics. 2020;46(3):205-11.
  • Cabitza F, Rasoini R, Gensini GF. Unintended consequences of machine learning in medicine. JAMA. 2017;318(6):517-8.
  • Bond RR, Novotny T, Andrsova I, Koc L, Sisakova M, Finlay D, et al. Automation bias in medicine: The influence of automated diagnoses on interpreter accuracy and uncertainty when reading electrocardiograms. Journal of electrocardiology. 2018;51(6):S6-S11.
  • Spitzer M. Outsourcing the mental? From knowledge-on-demand to Morbus Google. Trends in Neuroscience and Education. 2016;5(1):34-9.
  • Carr NG. The Shallows: What the Internet is Doing to Our Brains. W.W. Norton; 2010. Available from: https://books.google.it/books?id=9-8jnjgYrgYC.
  • Cabitza F. Cobra AI: Exploring Some Unintended Consequences. In: Pelillo M, Scantamburlo T, editors. Machines We Trust: Perspectives on Dependable AI. MIT Press; 2021. p. 87-104.
  • Malone TW. Superminds: The Surprising Power of People and Computers Thinking Together. Little, Brown; 2018. Available from: https://books.google.it/books?id=Qe0zDwAAQBAJ.
  • Pierce J. Undesigning interaction. Interactions. 2014;21(4):36-9.
  • Cabitza F. Biases affecting human decision making in AI-supported second opinion settings. In: International Conference on Modeling Decisions for Artificial Intelligence. Springer; 2019. p. 283-94.
  • Boni M. The ethical dimension of human–artificial intelligence collaboration. European View. 2021;20(2):182-90.
  • Taddeo M, Floridi L. How AI can be a force for good. Science. 2018;361(6404):751-2.
  • Floridi L. Establishing the rules for building trustworthy AI. Nature Machine Intelligence. 2019;1(6):261-2.
  • Methnani L, Aler Tubella A, Dignum V, Theodorou A. Let Me Take Over: Variable Autonomy for Meaningful Human Control. Frontiers in Artificial Intelligence. 2021;4. Available from: https://www.frontiersin.org/article/10.3389/frai.2021.737072.
  • The Council of Europe's Ad-Hoc Committee on AI (CAHAI). Towards Regulation of AI Systems. Council of Europe; 2020.
  • High-Level Expert Group on Artificial Intelligence (HLEG AI). Ethics Guidelines for Trustworthy Artificial Intelligence. European Commission; 2019.
  • Shahriari K, Shahriari M. IEEE standard review—Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. In: 2017 IEEE Canada International Humanitarian Technology Conference (IHTC). IEEE; 2017. p. 197-201.
  • European Commission, Joint Research Centre, Nativi S, De Nigris S. AI Watch, AI standardisation landscape state of play and link to the EC proposal for an AI regulatory framework. Publications Office; 2021.
  • Smuha NA. The EU approach to ethics guidelines for trustworthy artificial intelligence. Computer Law Review International. 2019;20(4):97-106.
  • Meza Martínez MA, Nadj M, Maedche A. Towards an integrative theoretical framework of interactive machine learning systems. 2019.
  • Wiethof C, Bittner EA. Hybrid Intelligence–Combining the Human in the Loop with the Computer in the Loop: A Systematic Literature Review. 2021.
  • Xu W, Dainoff MJ, Ge L, Gao Z. From Human-Computer Interaction to Human-AI Interaction: New Challenges and Opportunities for Enabling Human-Centered AI. ArXiv. 2021;abs/2105.05424.
  • Holzinger A. Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Informatics. 2016;3(2):119-31.
  • Parasuraman R, Manzey DH. Complacency and bias in human use of automation: An attentional integration. Human factors. 2010;52(3):381-410.
  • Shneiderman B. Human-centered artificial intelligence: three fresh ideas. AIS Transactions on Human-Computer Interaction. 2020;12(3):109-24.
  • Skitka LJ, Mosier K, Burdick MD. Accountability and automation bias. International Journal of Human-Computer Studies. 2000;52(4):701-17.
  • Sloman S, Fernbach P. The Knowledge Illusion: Why We Never Think Alone. Penguin; 2017.
  • Frischmann B, Selinger E. Re-engineering humanity. Cambridge University Press; 2018.
  • Scalera L, Gallina P, Gasparetto A, Seriani S. Anti-Hedonistic Machines. Int J Mech Control. 2017;18:9-16.
  • Cabitza F, Campagner A, Ciucci D, Seveso A. Programmed inefficiencies in DSS-supported human decision making. In: International Conference on Modeling Decisions for Artificial Intelligence. Springer; 2019. p. 201-12.
  • Ohm P, Frankle J. Desirable inefficiency. Fla L Rev. 2018;70:777.
  • Tenner E. The Efficiency Paradox: What Big Data Can't Do. Knopf Doubleday Publishing Group; 2018. Available from: https://books.google.it/books?id=PgAtDwAAQBAJ.
  • Hildebrandt M. Privacy as protection of the incomputable self: From agnostic to agonistic machine learning. Theoretical Inquiries in Law. 2019;20(1):83-121.
  • Sadin É. L'intelligence artificielle, ou, L'enjeu du siècle: anatomie d'un antihumanisme radical. Collection Pour en finir avec. L'Échappée; 2018. Available from: https://books.google.it/books?id=yJ1uvQEACAAJ.
  • Muller JZ. The Tyranny of Metrics. Princeton University Press; 2018. Available from: https://books.google.it/books?id=J3GYDwAAQBAJ.
  • Crawford K. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press; 2021. Available from: https://books.google.it/books?id=KfodEAAAQBAJ.
  • Buçinca Z, Malaya MB, Gajos KZ. To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction. 2021;5(CSCW1):1-21.
  • Christakis NA. Blueprint: The Evolutionary Origins of a Good Society. Little, Brown Spark; 2019.
  • Hildebrandt M. Algorithmic regulation and the rule of law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2018;376(2128):20170355.
  • Park JS, Barber R, Kirlik A, Karahalios K. A Slow Algorithm Improves Users' Assessments of the Algorithm's Accuracy. Proceedings of the ACM on Human-Computer Interaction. 2019;3(CSCW):1-15.
  • Holzinger A, Müller H. Toward Human–AI Interfaces to Support Explainability and Causability in Medical AI. Computer. 2021;54(10):78-86.
  • Kaur S, Sharma R. Emotion AI: Integrating Emotional Intelligence with Artificial Intelligence in the Digital Workplace. In: Singh PK, Polkowski Z, Tanwar S, Pandey SK, Matei G, Pirvu D, editors. Innovations in Information and Communication Technologies (IICT-2020). Cham: Springer International Publishing; 2021. p. 337-43.
  • Jankuloski F, Bozinovski A, Pacovski V. Artificial Intelligence: Simulating Human Emotion and Surpassing Human Intelligence. 2020.
  • Montemayor C, Halpern J, Fairweather A. In principle obstacles for empathic AI: why we can't replace human empathy in healthcare. AI & Society. 2021:1-7.
  • Nass C, Steuer J, Tauber E, Reeder H. Anthropomorphism, agency, and ethopoeia: computers as social actors. In: INTERACT'93 and CHI'93 conference companion on Human factors in computing systems; 1993. p. 111-2.
  • Hamacher A, Bianchi-Berthouze N, Pipe AG, Eder K. Believing in BERT: Using expressive communication to enhance trust and counteract operational error in physical Human-robot interaction. In: 2016 25th IEEE international symposium on robot and human interactive communication (RO-MAN). IEEE; 2016. p. 493-500.
  • Shneiderman B. Human-centered AI. Issues in Science and Technology. 2021;37(2):56-61.
  • Hildebrandt M. New animism in policing: re-animating the rule of law. The SAGE handbook of global policing. 2016:406-28.
  • Atkinson C, Brooks L. In the Age of the Humanchine. ICIS 2005 Proceedings. 2005:11.
  • IEEE. Ethically Aligned Design (Version 1). The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 2016.
  • Turing AM. Computing Machinery and Intelligence. Mind. 1950;59(October):433-60.
  • Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health. 2021;3(11):e745-50.
  • Adadi A, Berrada M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access. 2018;6:52138-60.
  • Cabitza F, Alderighi C, Rasoini R, Gensini GF. "Handle with care": about the potential unintended consequences of oracular artificial intelligence systems in medicine. Recenti progressi in medicina. 2017;108(10):397-401.