Published March 24, 2021 | Version v2
Dataset Open

PE-HRI: A Multimodal Dataset for the study of Productive Engagement in a robot mediated Collaborative Educational Setting

  • 1. École Polytechnique Fédérale de Lausanne (EPFL)
  • 2. Sorbonne Université

Description

This dataset consists of engagement-related multimodal team behaviors and learning outcomes collected in the context of JUSThink [1,2], a robot-mediated collaborative and constructivist learning activity. The dataset can be useful for those looking to explore or validate theoretical models of engagement. It is inspired by our efforts to critically assess engagement modelling in educational HRI contexts, which eventually led us to propose the concept of 'Productive Engagement'. More on this can be found in [3,4,5].

The JUSThink platform consists of two screens and a QTrobot acting as a guide and mediator. The platform aims to (1) improve children's computational skills by imparting intuitive knowledge about the minimum-spanning-tree problem and (2) promote collaboration within the team through its design. As an experimental setup for HRI studies, it also serves as a platform for designing and evaluating robot behaviors that are effective for these pedagogical goals or for general HRI questions such as trust, robot perception, engagement, and collaboration. The minimum-spanning-tree problem is introduced through a gold-mining scenario based on a map of Switzerland, where mountains represent gold mines labelled with Swiss city names.
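For readers unfamiliar with the minimum-spanning-tree problem, here is a minimal sketch of a Kruskal-style solution in Python; it is not part of the dataset or the JUSThink software, and the place names and edge costs are made up for illustration:

```python
# Illustrative only: Kruskal-style minimum spanning tree on a hypothetical
# toy graph (made-up Swiss place names and costs), mirroring the kind of
# problem the children solve on the map.

def minimum_spanning_tree(nodes, edges):
    """edges: iterable of (cost, u, v); returns the list of selected edges."""
    parent = {n: n for n in nodes}

    def find(n):  # union-find root lookup with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    tree = []
    for cost, u, v in sorted(edges):      # cheapest edges first
        root_u, root_v = find(u), find(v)
        if root_u != root_v:              # keep the edge only if it joins two components
            parent[root_u] = root_v
            tree.append((cost, u, v))
    return tree

toy_edges = [(3, "Bern", "Luzern"), (5, "Bern", "Zermatt"),
             (2, "Luzern", "Zermatt"), (4, "Zermatt", "Davos")]
print(minimum_spanning_tree({"Bern", "Luzern", "Zermatt", "Davos"}, toy_edges))
```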

The features in the dataset are grounded in and motivated by the engagement literature in HRI and Intelligent Tutoring Systems. The dataset consists of team-level data collected from 34 teams of two (68 children) aged between 9 and 12. More specifically, it contains:

  • PE-HRI_behavioral.csv: This file consists of team-level multimodal behavioral data, namely log data capturing interaction with the setup, speech behavior, affective states, and gaze patterns (a short loading sketch for both CSV files is given at the end of this description). The definition of each feature is given below: 

    • T_add: The number of times a team added an edge on the map.

    • T_remove: The number of times a team removed an edge from the map.

    • T_ratio_add_del: The ratio of edge additions to edge deletions by a team.

    • T_action: The total number of actions taken by a team (add, delete, submit, presses on the screen).

    • T_hist: The number of times a team opened the sub-window with the history of their previous solutions.

    • T_help: The number of times a team opened the instructions manual. Note that the robot gives all the instructions before game-play, while a video demonstrating the game's functionality is played.

    • T1_T1_rem: The number of times either team member consecutively followed the pattern: I add an edge, then I delete it.

    • T1_T1_add: The number of times either team member consecutively followed the pattern: I delete an edge, then I add it back.

    • T1_T2_rem: The number of times the team consecutively followed the pattern: I add an edge, then you delete it.

    • T1_T2_add: The number of times the team consecutively followed the pattern: I delete an edge, then you add it back.

    • redundant_exist: The number of times the team had redundant edges in their map.

    • positive_valence: The average value of positive valence for the team.

    • negative_valence: The average value of negative valence for the team.

    • mean_pos_minus_neg_valence: The difference of the average value of positive and negative valence for the team.

    • arousal: The average value of arousal for the team.

    • smile_count: The average percentage of time the team spent smiling.

    • at_partner: The average percentage of time the team members spent looking at their partner.

    • at_robot: The average percentage of time a team is looking at the robot.

    • other: The average percentage of time a team is looking in the direction opposite to the robot.

    • screen_left: The average percentage of time a team is looking at the left side of the screen.

    • screen_right: The average percentage of time a team is looking at the right side of the screen.

    • screen_right_left_ratio: The ratio of looking at the right side of the screen over the left side.

    • voice_activity: The average percentage of time a team is speaking over the entire duration of the task.

    • silence: The average percentage of time a team is silent over the entire duration of the task.

    • short_pauses: The average percentage of time a team makes short pauses (0.15 sec), relative to their speech activity.

    • long_pauses: The average percentage of time a team makes long pauses (1.5 sec), relative to their speech activity.

    • overlap: The average percentage of time the speech of the team members overlaps over the entire duration of the task. 

    • overlap_to_speech_ratio: The ratio of the speech overlap over the speech activity (voice_activity) of the team. 

  • PE-HRI_learning_and_performance.csv: This file consists of the team-level performance and learning metrics, which are defined below: 

    • last_error: The error of the last submitted solution. Note that if a team finds an optimal solution (error = 0), the game stops, thus making the last error 0. This is a metric of task performance. 

    • T_LG_absolute: A team-level learning outcome that we calculate by taking the average of the two team members' individual absolute learning gains (see the sketch after this list). The individual absolute gain is the difference between a participant's post-test and pre-test scores, divided by the maximum achievable score (10); it captures how much of all the available knowledge the participant learned.

    • T_LG_relative: A team-level learning outcome that we calculate by taking the average of the two team members' individual relative learning gains. The individual relative gain is the difference between a participant's post-test and pre-test scores, divided by the difference between the maximum achievable score and the pre-test score; it captures how much the participant learned of the knowledge they did not possess before the activity. 

    • T_LG_joint_abs: A team-level learning outcome defined as the difference between the number of questions that both team members answer correctly in the post-test and the number they both answer correctly in the pre-test; it captures the amount of knowledge acquired together by the team members during the activity.
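A minimal sketch of how these learning-gain definitions translate into code, assuming Python; the pre/post-test scores and the per-question correctness arrays below are made up for illustration and do not reflect the dataset's format:

```python
# Illustrative only: computing the team-level learning-gain metrics defined
# above from made-up pre/post-test data for one hypothetical team.

MAX_SCORE = 10  # maximum achievable test score, as stated in the description

def absolute_gain(pre, post):
    # individual absolute gain: (post - pre) / maximum achievable score
    return (post - pre) / MAX_SCORE

def relative_gain(pre, post):
    # individual relative gain: (post - pre) / (maximum achievable score - pre)
    return (post - pre) / (MAX_SCORE - pre)

# hypothetical pre/post-test scores for the two team members
pre_a, post_a = 4, 7
pre_b, post_b = 6, 9

T_LG_absolute = (absolute_gain(pre_a, post_a) + absolute_gain(pre_b, post_b)) / 2
T_LG_relative = (relative_gain(pre_a, post_a) + relative_gain(pre_b, post_b)) / 2

# per-question correctness (1 = correct), consistent with the scores above;
# the joint absolute gain counts questions answered correctly by *both*
# members, in the post-test minus the pre-test
pre_correct_a  = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
pre_correct_b  = [1, 1, 0, 0, 0, 1, 1, 1, 1, 0]
post_correct_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
post_correct_b = [1, 1, 1, 1, 1, 1, 0, 1, 1, 1]

both_pre  = sum(a and b for a, b in zip(pre_correct_a, pre_correct_b))
both_post = sum(a and b for a, b in zip(post_correct_a, post_correct_b))
T_LG_joint_abs = both_post - both_pre

print(T_LG_absolute, T_LG_relative, T_LG_joint_abs)  # 0.3 0.625 4
```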

More details on the JUSThink learning activity can be found via the related identifiers listed under 'Additional details' below. Lastly, a temporal version of this dataset is also available [6].
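As a quick start, here is a minimal loading sketch; it assumes a local copy of the two CSV files and that the column headers match the feature names listed above (both are assumptions about the released files, not guarantees):

```python
# Minimal usage sketch: load both team-level CSVs and sanity-check one of the
# derived features. File names and column headers are assumed to match the
# description above; adjust paths/names if the local copies differ.
import pandas as pd

behav = pd.read_csv("PE-HRI_behavioral.csv")
learn = pd.read_csv("PE-HRI_learning_and_performance.csv")

print(behav.shape, learn.shape)  # expected: 34 teams (rows) in each file

# overlap_to_speech_ratio is defined as speech overlap over voice_activity,
# so recomputing it should roughly match the released column (up to rounding)
recomputed = behav["overlap"] / behav["voice_activity"]
print((behav["overlap_to_speech_ratio"] - recomputed).abs().max())
```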

Files (23.9 kB)

PE-HRI_behavioral.csv

md5:8006f72c1f581bdd3488a76372b66786 — 13.0 kB
md5:527144cd9d158ee9062eb685a0e23ed8 — 2.3 kB
md5:ef76e7c4c6d92ea23f9316baf6e2fa72 — 8.6 kB

Additional details

Related works

References
Conference paper: 10.1109/RO-MAN47096.2020.9223343 (DOI)
Conference paper: https://infoscience.epfl.ch/record/280176?ln=en (URL)

Funding

ANIMATAS – Advancing intuitive human-machine interaction with human-like social capabilities for education in schools (grant no. 765955), European Commission

References

  • [1] J. Nasir, U. Norman, B. Bruno, and P. Dillenbourg, "You Tell, I Do, and We Swap until we Connect All the Gold Mines!," ERCIM News, vol. 2020, no. 120, 2020. [Online]. Available: https://ercim-news.ercim.eu/en120/special/you-tell-i-do-and-we-swap-until-we-connect-all-the-gold-mines
  • [2] J. Nasir*, U. Norman*, B. Bruno, and P. Dillenbourg, "When Positive Perception of the Robot Has No Effect on Learning," in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Aug. 2020, pp. 313–320, doi: 10.1109/RO-MAN47096.2020.9223343.
  • [3] J. Nasir, B. Bruno, and P. Dillenbourg, "Is there 'one way' of learning? A data-driven approach," in Companion Publication of the 2020 International Conference on Multimodal Interaction (ICMI '20 Companion), Association for Computing Machinery, New York, NY, USA, 2020, pp. 388–391, doi: 10.1145/3395035.3425200.
  • [4] J. Nasir, B. Bruno, M. Chetouani, and P. Dillenbourg, "What if social robots look for productive engagement?," International Journal of Social Robotics, 2021, doi: 10.1007/s12369-021-00766-w.
  • [5] J. Nasir, A. Kothiyal, B. Bruno, and P. Dillenbourg, "Many are the ways to learn identifying multi-modal behavioral profiles of collaborative learning in constructivist activities," International Journal of Computer-Supported Collaborative Learning, vol. 16, pp. 485–523, 2021, doi: 10.1007/s11412-021-09358-2.
  • [6] J. Nasir, B. Bruno, and P. Dillenbourg, "PE-HRI-temporal: A Multimodal Temporal Dataset in a robot mediated Collaborative Educational Setting" [Data set], Zenodo, 2021, doi: 10.5281/zenodo.5576058.