Conference paper Open Access
Dario Pasquali; Jonas Gonzalez-Billandon; Francesco Rea; Giulio Sandini; Alessandra Sciutti
Title: Magic iCub: a Humanoid Robot Autonomously Catching Your Lies in a Card Game
Authors: Dario Pasquali (Robotics, Brain and Cognitive Science (RBCS), Istituto Italiano di Tecnologia (IIT), DIBRIS); Jonas Gonzalez-Billandon (RBCS, IIT, DIBRIS); Francesco Rea (RBCS, IIT); Giulio Sandini (RBCS, IIT); Alessandra Sciutti (COgNiTive Architecture for Collaborative Technologies (CONTACT), IIT)
Publisher: Zenodo
Published: 2021-03-08
Language: English
Keywords: Entertainment, magic, human-robot interaction, pupillometry, cognitive load
Funding: European Commission, H2020 grant 804388, "investigating Human Shared PErception with Robots"
DOI: https://doi.org/10.1145/3434073.3444682
Record: https://zenodo.org/record/4405851
License: Creative Commons Attribution 4.0 (CC BY 4.0)
Access: Open Access
File: HRI_camera_ready_final_5.pdf (application/pdf, 787,642 bytes)

Abstract
Games are often used to foster human partners' engagement and natural behavior, even when they are played with or against robots. Therefore, beyond their entertainment value, games represent ideal interaction paradigms in which to investigate natural human-robot interaction and to foster the diffusion of robots in society. However, most state-of-the-art games involving robots are driven with a Wizard of Oz approach. To address this limitation, we present an end-to-end (E2E) architecture that enables the iCub robotic platform to autonomously lead an entertaining magic card trick with human partners. We demonstrate that with this architecture the robot is capable of autonomously directing the game from beginning to end. In particular, the robot could detect in real time when players lied in the description of one card in their hands (the secret card). In a validation experiment, the robot achieved an accuracy of 88.2% (against a chance level of 16.6%) in detecting the secret card while the social interaction naturally unfolded. The results demonstrate the feasibility of our approach and its effectiveness in maintaining the engagement of the players and entertaining the participants. Additionally, we provide evidence that important measures of the human partner's inner state, such as the cognitive load related to lie creation, can be detected with pupillometry in a short and ecological game-like interaction with a robot.
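The full paper describes the actual E2E architecture; as a rough illustration of the pupillometry idea summarized in the abstract, the sketch below guesses, among six described cards (a count implied by the 16.6% chance level), the one whose description elicited the largest baseline-corrected pupil dilation. The function names, the feature choice, and the decision rule are illustrative assumptions, not the authors' published pipeline.

```python
import numpy as np

def mean_dilation(pupil_trace, baseline):
    """Mean baseline-corrected pupil diameter while one card is described.

    pupil_trace : 1-D array of pupil diameter samples (e.g., from an eye tracker)
    baseline    : scalar pupil diameter measured just before the description
    """
    return float(np.mean(np.asarray(pupil_trace) - baseline))

def guess_secret_card(descriptions):
    """Pick the card whose description shows the largest pupil dilation.

    descriptions : dict mapping card id -> (pupil_trace, baseline)
    Assumed rationale: fabricating a lie adds cognitive load, which tends to
    enlarge the pupil relative to truthful descriptions.
    """
    scores = {card: mean_dilation(trace, base)
              for card, (trace, base) in descriptions.items()}
    return max(scores, key=scores.get)

# Toy usage with simulated data: card "C4" is the lied-about (secret) card.
rng = np.random.default_rng(0)
data = {f"C{i}": (rng.normal(3.0, 0.05, 200), 3.0) for i in range(1, 7)}
data["C4"] = (rng.normal(3.3, 0.05, 200), 3.0)  # simulated lie-related dilation
print(guess_secret_card(data))                   # expected: C4
```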
Views: 154
Downloads: 218
Data volume: 171.7 MB
Unique views: 128
Unique downloads: 203