Ex(plainable) Machina: how social-implicit XAI affects complex human-robot teaming tasks
In this paper, we investigated how shared experience-based counterfactual explanations affected people's performance and robots' persuasiveness during a decision-making task in a social HRI context. We used the game Connect 4 as a complex decision-making task in which participants and the robot played as a team against the computer. We compared two explanation-generation strategies (classical vs. shared experience-based) and examined their differences in terms of team performance, the robot's persuasive power, and participants' perception of the robot and of themselves. Our results showed that the two explanation strategies led to comparable team performance. However, shared experience-based explanations, which drew on the team's previous games, made the robot's suggestions more persuasive than classical ones. Finally, we observed that low performers tended to follow the robot more than high performers, offering insight into the potential risks for non-expert users interacting with expert explainable robots.
ICRA 2023