Bernd Kiefer
Ivana Kruijff-Korbayova
Anna Welker
Rifca Peters
Sarah McLeod
2019-09-18
<p>The present report describes the work carried out in the fourth project year regarding Natural Multimodal Interaction. It summarises Deliverable D4.4: "Natural multimodal interaction final prototype". Most efforts in year 4 were targeted towards the final integrated system for the experiments over an extended time range. New activities and modules have been added, and many existing ones extended; most of these additions or extensions also affect the multimodal interaction in one way or another. The dialogue policies and the linguistic resources have been adapted accordingly. In addition, some new functionalities are at the core of human-system interaction, namely a new module for first-time use of the system, and an integrated Off-Activity-Talk prototype consisting of a robot self-disclosure part and a social talk part.<br>
The Episodic Memory has been extended, and interaction modules have been added for the following activities: Standalone Break & Sort, the new Dance activity, the so-called Tip of the Day, the Task Suggestions, and the Explainable AI module.<br>
Furthermore, support for an experiment with different interaction styles, using modulated gestures, has been added, and the VOnDA compiler and run-time system have been heavily improved. Version 2.0 of the framework has been released on GitHub. For less experienced users, a graphical editor and compiler for hierarchical state machines has been implemented and is now ready for use.</p>
https://doi.org/10.5281/zenodo.3443670
oai:zenodo.org:3443670
Zenodo
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.3443669
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
multimodal interaction
human-system interaction
robot
social talk
DR 4.4: Natural Multimodal Interaction Final Prototype
info:eu-repo/semantics/report