Published September 5, 2024 | Version v1
Conference paper | Open Access

A Robot's Moral Advice Is Not Appreciated Neither in Functional nor in Social Communication

  • 1. Bielefeld University
  • 2. Ruhr West University of Applied Sciences
  • 3. TU Dresden

Description

This study (N = 317) investigated the influence of verbal communication style (social vs. functional) on the acceptance of robot recommendations in non-moral, somewhat moral, and very moral decision-making situations. The robot’s communication style had no impact on whether participants (1) were confident in their decision, (2) perceived the robot’s recommendation as helpful, or (3) made a decision dependent on the robot’s recommendation. However, all three aspects were strongly influenced by the morality of the decision situation, demonstrating higher algorithm aversion in moral contexts.

Files

Arlinghaus_et_al_2024_ROMAN_LBR_Pepper.pdf (776.8 kB; md5:34528abc9b38fa352c467f718e9fa151)

Additional details

Related works

Is part of
Preprint: 10.31219/osf.io/bufjh (DOI)

References

  • B. J. Dietvorst, J. P. Simmons, and C. Massey, Algorithm aversion: People erroneously avoid algorithms after seeing them err, Journal of Experimental Psychology: General, vol. 144, no. 1, pp. 114–126, 2015. https://doi.org/10.1037/xge0000033
  • C. Larkin, C. Drummond Otten, and J. Árvai, Paging Dr. JARVIS! Will people accept advice from artificial intelligence for consequential risk management decisions?, Journal of Risk Research, vol. 25, no. 4, pp. 407–422, 2022. https://doi.org/10.1080/13669877.2021.1958047
  • C. Longoni, A. Bonezzi, C. K. Morewedge, Resistance to medical artificial intelligence, Journal of Consumer Research, vol. 46, no. 4, pp. 629–650, 2019. https://doi.org/10.1093/jcr/ucz013
  • Y. E. Bigman, and K. Gray, People are averse to machines making moral decisions, Cognition, vol. 181, pp. 21–34, 2018. https://doi.org/10.1016/j.cognition.2018.08.003
  • L. Kunold, and L. Onnasch, A framework to study and design communication with social robots, Robotics, vol. 11, no. 6, 129, 2022. https://doi.org/10.3390/robotics11060129
  • G. Maggi, E. Dell'Aquila, I. Cucciniello, and S. Rossi, "Don't get distracted!": The role of social robots' interaction style on users' cognitive performance, acceptance, and non-compliant behavior, International Journal of Social Robotics, vol. 13, no. 8, pp. 2057–2069, 2021. https://doi.org/10.1007/s12369-020-00702-4
  • C. S. Arlinghaus, A. Dix, C. Straßmann, S. A. Pertuz, A. Podlubne, D. Göhringer, and S. Pannasch, Influence of robot's social communication on robot's evaluation and human decision making behavior in nonmoral, somewhat moral, and very moral decision situations, 2022. https://doi.org/10.17605/OSF.IO/7G8K2
  • C. S. Arlinghaus, C. Straßmann, and A. Dix, Increased morality through social communication or decision situation worsens the acceptance of robo-advisors. Preprint. https://doi.org/10.31219/osf.io/bufjh
  • N. Lee, J. Kim, E. Kim, and O. Kwon, The influence of politeness behavior on user compliance with social robots in a healthcare service setting, International Journal of Social Robotics, vol. 9, pp. 727–743, 2017. https://doi.org/10.1007/s12369-017-0420-0
  • S. Saunderson, and G. Nejat, Robots asking for favors: The effects of directness and familiarity on persuasive HRI, IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 1793–1800, 2021. https://doi.org/10.1109/LRA.2021.3060369
  • A. C. Horstmann, N. Bock, E. Linhuber, J. M. Szczuka, C. Straßmann, and N. C. Krämer, Do a robot's social skills and its objection discourage interactants from switching the robot off?, PLoS ONE, vol. 13, no. 7, e0201581, 2018. https://doi.org/10.1371/journal.pone.0201581
  • D. L. Johanson, H. S. Ahn, and E. Broadbent, Improving interactions with healthcare robots: A review of communication behaviours in social and healthcare contexts, International Journal of Social Robotics, vol. 13, no. 8, pp. 1835–1850, 2021. https://doi.org/10.1007/s12369-020-00719-9
  • I. Saltik, D. Erdil, and B. A. Urgen, Mind perception and social robots: The role of agent appearance and action types, in Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021), 2021, pp. 210–214. https://doi.org/10.1145/3434074.3447161
  • P. Formosa, and M. Ryan, Making moral machines: Why we need artificial moral agents, AI & Society, vol. 36, no. 3, pp. 839–851, 2021. https://doi.org/10.1007/s00146-020-01089-6
  • S. Saunderson, and G. Nejat, Investigating strategies for robot persuasion in social human–robot interaction, IEEE Transactions on Cybernetics, vol. 52, no. 1, pp. 641–653, 2022. https://doi.org/10.1109/tcyb.2020.2987463
  • P. H. Kahn, T. Kanda, H. Ishiguro, B. T. Gill, J. H. Ruckert, S. Shen, H. E. Gary, A. L. Reichert, N. G. Freier, and R. L. Severson, Do people hold a humanoid robot morally accountable for the harm it causes?, in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI 2012), 2012, pp. 33–40. https://doi.org/10.1145/2157689.2157696
  • Y. Takahashi, Y. Kayukawa, K. Terada, and H. Inoue, Emotional expressions of real humanoid robots and their influence on human decision-making in a finite iterated prisoner's dilemma game, International Journal of Social Robotics, vol. 13, pp. 1777–1786, 2021. https://doi.org/10.1007/s12369-021-00758-w
  • S. Aboulenine, Persuasion in president Biden's inauguration speech, Traduction et Langages, vol. 20, no. 1, pp. 186–208, 2021.
  • H. Pfaff, and J. Braithwaite, A Parsonian approach to patient safety: Transformational leadership and social capital as preconditions for clinical risk management — the GI factor, International Journal of Environmental Research and Public Health, vol. 17, no. 11, 3989, 2020. https://doi.org/10.3390/ijerph17113989
  • M. R. Fraune, S. Sabanovic, and E. R. Smith, Some are more equal than others, Interaction Studies, vol. 21, no. 3, pp. 303–328, 2020. https://doi.org/10.1075/is.18043.fra
  • C. Dautzenberg, G. M. I. Voß, S. Ladwid, and A. M. Rosenthal-von der Pütten, Investigation of different communication strategies for a delivery robot: The positive effects of humanlike communication styles, in Proceedings of the 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN 2021), 2021, pp. 356–361. https://doi.org/10.1109/RO-MAN50785.2021.9515547
  • G. Stephan, Amortisationsrechnung in der Radiologie, Radiologen WirtschaftsForum, vol. 9, no. 7-8, 2021.
  • N. D. Starr, B. Malle, and T. Williams, I need your advice… Human perceptions of robot moral advising behaviors, arXiv, 2021. https://doi.org/10.48550/arXiv.2104.06963
  • J. Savulescu, Good reasons to vaccinate: mandatory or payment for risk?, Journal of Medical Ethics, vol. 47, no. 2, pp. 78–85, 2021. https://doi.org/10.1136/medethics-2020-106821
  • W. Buckwalter, and A. C. Peterson, Public attitudes toward allocating scarce resources in the COVID-19 pandemic. PLoS ONE, vol. 15, no. 11, e0240651, 2020. https://doi.org/10.1371/journal.pone.0240651
  • S. Saunderson, and G. Nejat, It would make me happy if you used my guess: Comparing robot persuasive strategies in social human–robot interaction, IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1707–1714, 2019. https://doi.org/10.1109/lra.2019.2897143
  • L. Masjutin, J. Laing, and G. W. Maier, Why do we follow robots? An experimental investigation of conformity with robot, human, and hybrid majorities, in Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2022), 2022, pp. 139–146. https://doi.org/10.1109/HRI53351.2022.9889675