Published August 12, 2022 | Version v1
Journal article | Open Access

Comfortability Recognition from Visual Non-verbal Cues

  • 1. Italian Institute of Technology
  • 2. University of Trento

Description

As social agents, we experience situations in which we enjoy being involved, as well as others from which we desire to withdraw. Being aware of others' "comfort towards the interaction" helps us enhance our communication, making this a fundamental skill for any interactive agent (either a robot or an Embodied Conversational Agent (ECA)). For this reason, the current paper considers Comfortability, the internal state that reflects a person's desire to maintain or withdraw from an interaction, and explores whether it can be recognized from human non-verbal behaviour. To this aim, videos collected during real Human-Robot Interactions (HRI) were segmented, manually annotated, and used to train four standard classifiers. Concretely, different combinations of facial and upper-body movements (i.e., Action Units, Head Pose, Upper-body Pose and Gaze) were fed to the following feature-based Machine Learning (ML) algorithms: Naive Bayes, Neural Networks, Random Forest and Support Vector Machines.
The results indicate that the best model, achieving 75% recognition accuracy, is a Random Forest trained on all the aforementioned cues together. These findings indicate, for the first time, that Comfortability can be automatically recognized, paving the way for its future integration into interactive agents.
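The pipeline described above (concatenating per-segment non-verbal feature blocks and feeding them to a feature-based classifier such as Random Forest) can be sketched as follows. This is an illustrative sketch only: the abstract does not publish code, so the feature dimensions, the synthetic data, and the binary labelling below are hypothetical placeholders, not the authors' actual setup.

```python
# Hedged sketch of a feature-based Comfortability classifier.
# Feature-block sizes and data are invented for illustration; the paper's
# real features come from annotated HRI video segments.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_segments = 200  # hypothetical number of annotated video segments

# Hypothetical per-segment feature blocks, one row per segment,
# concatenated into a single feature vector per segment.
action_units = rng.random((n_segments, 17))
head_pose = rng.random((n_segments, 3))
upper_body_pose = rng.random((n_segments, 8))
gaze = rng.random((n_segments, 2))
X = np.hstack([action_units, head_pose, upper_body_pose, gaze])

# Hypothetical binary Comfortability labels (e.g. comfortable vs. not).
y = rng.integers(0, 2, size=n_segments)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

With random features and labels the accuracy here hovers around chance; the point is only the shape of the approach, not the paper's reported 75% result.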

Files

ComfortabilityRecognition_ICMI2022_cameraready.pdf

Files (810.8 kB)

Additional details

Funding

wHiSPER – investigating Human Shared PErception with Robots 804388
European Commission