Published July 4, 2022 | Version v1
Conference paper | Open Access

Ex(plainable) Machina: how social-implicit XAI affects complex human-robot teaming tasks

  • 1. Italian Institute of Technology/University of Genoa
  • 2. Italian Institute of Technology

Description

In this paper, we investigated how shared experience-based counterfactual explanations affected people's performance and the robot's persuasiveness during a decision-making task in a social HRI context. We used the game Connect 4 as a complex decision-making task in which participants and the robot played as a team against the computer. We compared two explanation-generation strategies (classical vs. shared experience-based) and investigated their differences in terms of team performance, the robot's persuasive power, and participants' perception of the robot and of themselves. Our results showed that the two explanation strategies led to comparable performance. Moreover, shared experience-based explanations, grounded in the team's previous games, made the robot's suggestions more persuasive than classical ones. Finally, we noted that low performers tended to follow the robot more than high performers, providing insight into the potential risks for non-expert users interacting with expert explainable robots.

Files (445.3 kB)

ICRA2023_Ex(plainable) Machina how social-implicit XAI affects complex.pdf

Additional details

Funding

wHiSPER – investigating Human Shared PErception with Robots (804388)
European Commission