Journal article · Open Access

Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Silvia Santano Guillén; Luigi Lo Iacono; Christian Meder


DCAT Export

<?xml version='1.0' encoding='utf-8'?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:adms="http://www.w3.org/ns/adms#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dct="http://purl.org/dc/terms/" xmlns:dctype="http://purl.org/dc/dcmitype/" xmlns:dcat="http://www.w3.org/ns/dcat#" xmlns:duv="http://www.w3.org/ns/duv#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:frapo="http://purl.org/cerif/frapo/" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:gsp="http://www.opengis.net/ont/geosparql#" xmlns:locn="http://www.w3.org/ns/locn#" xmlns:org="http://www.w3.org/ns/org#" xmlns:owl="http://www.w3.org/2002/07/owl#" xmlns:prov="http://www.w3.org/ns/prov#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:vcard="http://www.w3.org/2006/vcard/ns#" xmlns:wdrs="http://www.w3.org/2007/05/powder-s#">
  <rdf:Description rdf:about="https://doi.org/10.5281/zenodo.1316752">
    <dct:identifier rdf:datatype="http://www.w3.org/2001/XMLSchema#anyURI">https://doi.org/10.5281/zenodo.1316752</dct:identifier>
    <foaf:page rdf:resource="https://doi.org/10.5281/zenodo.1316752"/>
    <dct:creator>
      <rdf:Description>
        <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
        <foaf:name>Silvia Santano Guillén</foaf:name>
      </rdf:Description>
    </dct:creator>
    <dct:creator>
      <rdf:Description>
        <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
        <foaf:name>Luigi Lo Iacono</foaf:name>
      </rdf:Description>
    </dct:creator>
    <dct:creator>
      <rdf:Description>
        <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
        <foaf:name>Christian Meder</foaf:name>
      </rdf:Description>
    </dct:creator>
    <dct:title>Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines</dct:title>
    <dct:publisher>
      <foaf:Agent>
        <foaf:name>Zenodo</foaf:name>
      </foaf:Agent>
    </dct:publisher>
    <dct:issued rdf:datatype="http://www.w3.org/2001/XMLSchema#gYear">2018</dct:issued>
    <dcat:keyword>Affective computing</dcat:keyword>
    <dcat:keyword>emotion recognition</dcat:keyword>
    <dcat:keyword>humanoid robot</dcat:keyword>
    <dcat:keyword>Human-Robot-Interaction (HRI)</dcat:keyword>
    <dcat:keyword>social robots</dcat:keyword>
    <dct:issued rdf:datatype="http://www.w3.org/2001/XMLSchema#date">2018-04-04</dct:issued>
    <dct:language rdf:resource="http://publications.europa.eu/resource/authority/language/ENG"/>
    <owl:sameAs rdf:resource="https://zenodo.org/record/1316752"/>
    <adms:identifier>
      <adms:Identifier>
        <skos:notation rdf:datatype="http://www.w3.org/2001/XMLSchema#anyURI">https://zenodo.org/record/1316752</skos:notation>
        <adms:schemeAgency>url</adms:schemeAgency>
      </adms:Identifier>
    </adms:identifier>
    <dct:isVersionOf rdf:resource="https://doi.org/10.5281/zenodo.1316751"/>
    <owl:versionInfo>10009027</owl:versionInfo>
    <dct:description>One of the main aims of current social robotics research is to improve robots' abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions, including facial expression, speech, gesture or text, and using this information for improved human-computer interaction. This can be described as Affective Computing, an interdisciplinary field that expands into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Leveraging these emotional capabilities by embedding them in humanoid robots is the foundation of the concept of Affective Robots, which has the objective of making robots capable of sensing the user's current mood and personality traits and adapting their behavior in the most appropriate manner based on that. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on facial expressions for the so-called basic emotions, and compared against other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments' results show that the detection accuracy amongst the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.</dct:description>
    <dct:description>{"references": ["Mayer JD, Salovey P, Caruso DR, Mayer Salovey Caruso Emotional\nIntelligence Test (MSCEIT) users manual, 2.0. Toronto, Canada: MHS\nPublishers, 2002.", "J. Liu, A. Harris, N. Kanwisher, Stages of processing in face perception:\nan meg study, Nat Neurosci, vol. 5, pp. 910916, 09 2002.", "Klaus R. Scherer, Mayer Salovey Caruso Emotional Intelligence Test\n(MSCEIT) users manual, v. 44, 695-729 Social Science Information,\n2005.", "E. Kennedy-Moore, J. Watson, Expressing Emotion: Myths, Realities, and\nTherapeutic Strategies. Emotions and social behavior, Guilford Press,\n1999.", "P. Ekman, Universals and Cultural Differences in Facial Expressions of\nEmotion. University of Nebraska Press, 1971.", "P. Ekman, W. V. Friesen, and J. C. Hager, The facial action coding system,\nin Research Nexus eBook, 2002.", "P. Lucey, J. F. Cohn, T. Kanade, J. M. Saragih, Z. Ambadar, and I. A.\nMatthews, The extended cohn-kanade dataset (CK+): A complete dataset\nfor action unit and emotion-specified expression, in IEEE Conference on\nComputer Vision and Pattern Recognition, CVPR Workshops 2010, San\nFrancisco, CA, USA, 13-18 June, 2010, pp. 94\u2013101, 2010.", "Challenges in representation learning: Facial expression recognition\nchallenge, https://www.kaggle.com/c/challenges-in-representationlearning-\nfacial-expression-recognition-challenge (Last accessed: in April\n2018)", "M. J. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, Coding facial\nexpressions with gabor wavelets in 3rd International Conference on Face\n&amp; Gesture Recognition (FG'98), Nara, Japan, pp. 200\u2013205, 1998.\n[10] The Third Emotion Recognition in The Wild (EmotiW) 2015 Grand\nChallenge, http://cs.anu.edu.au/few/emotiw2015.html (Last accessed:\nApril 2018)\n[11] Z. Yu and C. Zhang, Image based static facial expression recognition\nwith multiple deep network learning in ICMI' 15 Proceedings of the 2015\nACM on International Conference on Multimodal Interaction, Seattle,\nWA, USA, pp. 435\u2013442, 2015.\n[12] B.-K. Kim, J. Roh, S.-Y. Dong, and S.-Y. Lee, Hierarchical committee\nof deep convolutional neural networks for robust facial expression\nrecognition in J. Multimodal User Interfaces, vol. 10, no. 2, pp. 173\u2013189,\n2016.\n[13] G. Levi and T. Hassner, Emotion recognition in the wild via\nconvolutional neural networks and mapped binary patterns in ICMI' 15\nProceedings of the 2015 ACM on International Conference on Multimodal\nInteraction, Seattle, WA, USA, pp. 503\u2013510, 2015.\n[14] Y. Lv, Z. Feng, and C. Xu, Facial expression recognition via deep\nlearning, in SMARTCOMP, IEEE Computer Society, pp. 303\u2013308, 2014.\n[15] T. Ahsan, T. Jabid, and U.-P. Chong, Facial expression recognition using\nlocal transitional pattern on gabor filtered facial images, IETE Technical\nReview, vol. 30, no. 1, pp. 47\u201352, 2013.\n[16] A. Gudi, Recognizing semantic features in faces using deep learning,\nCoRR, vol. abs/1512.00743, 2015.\n[17] E. Correa, A. Jonker, M. Ozo, and R. Stolk, Emotion recognition using\ndeep convolutional neural networks., 2016.\n[18] M. Hashemian, H. Moradi, and M. S. Mirian, How is his/her mood:\nA question that a companion robot may be able to answer, in Social\nRobotics: 8th International Conference, ICSR 2016, Kansas City, MO,\nUSA, November 1-3, 2016 Proceedings (A. Agah, J.-J. Cabibihan, A. M.\nHoward, M. A. Salichs, and H. He, eds.), pp. 274\u2013284, Springer\nInternational Publishing, 2016.\n[19] M. M. A. de Graaf, S. Ben Allouch, and J. A. G. M. 
van Dijk,\nWhat makes robots social?: A user's perspective on characteristics\nfor social human-robot interaction, in Proceedings of Social Robotics:\n7th International Conference, ICSR 2015, Paris, France, pp. 184\u2013193,\nSpringer International Publishing, 2015.\n[20] A. Meghdari, M. Alemi, A. G. Pour, and A. Taheri, Spontaneous\nhuman-robot emotional interaction through facial expressions, in Social\nRobotics: 8th International Conference, ICSR 2016, Kansas City, MO,\nUSA, November 1-3, 2016 Proceedings (A. Agah, J.-J. Cabibihan, A. M.\nHoward, M. A. Salichs, and H. He, eds.), (Cham), pp. 351\u2013361, Springer\nInternational Publishing, 2016.\n[21] U. Hess and R. E. Kleck, Differentiating emotion elicited and deliberate\nemotional facial expressions, European Journal of Social Psychology,\nvol. 20, no. 5, pp. 369\u2013385, 1990.\n[22] M. Hirose, T. Takenaka, H. Gomi and N. Ozawa, Humanoid robot,\nJournal of the Robotics Society of Japan, vol. 15, no. 7, pp. 983\u2013985,\n1997.\n[23] K. Hirai, M. Hirose, Y. Haikawa and T. Takenaka, The Honda\nhumanoid robot: development and future perspective, Industrial Robot:\nAn International Journal, vol. 26, no. 4, pp. 260\u2013266, 1999.\n[24] P. Ekman, J.C. Hager, W.V. Friesen, The symmetry of emotional and\ndeliberate facial actions, Psychophysiology, 18: 101-106, 1981.\n[25] A. Schaefer, F. Nils, X. Sanchez, and P. Philippot, Assessing the\neffectiveness of a large database of emotion-eliciting films: A new\ntool for emotion researchers, Cognition and Emotion, vol. 24, no. 7,\npp. 1153\u20131172, 2010.\n[26] Aldebaran (Softbank Robotics), Pepper robot,\nhttps://www.ald.softbankrobotics.com/en/robots/pepper (Last accessed:\nApril 2018)\n[27] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin,\nS. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga,\nS. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden,\nM. Wicke, Y. Yu, and X. Zheng, Tensorflow: A system for large-scale\nmachine learning in 12th USENIX Symposium on Operating Systems\nDesign and Implementation (OSDI 16), pp. 265\u2013283, 2016. [28] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, ImageNet:\nA Large-Scale Hierarchical Image Database in IEEE Computer Vision\nand Pattern Recognition (CVPR), 2009.\n[29] Softbank Robotics, ALMood Module,\nhttp://doc.aldebaran.com/2-4/naoqi/core/almood.html (Last accessed:\nApril 2018)\n[30] OMRON Corporation, Facial Expression Estimation Technology,\nhttps://www.omron.com/media/press/2012/10/e1023.html (Last accessed:\nApril 2018)\n[31] Google Inc., Cloud Vision API, https://cloud.google.com/vision/ (Last\naccessed: April 2018)\n[32] Microsoft Corporation, Emotion API,\nhttps://azure.microsoft.com/en-us/services/cognitive-services/emotion/\n(Last accessed: April 2018)\n[33] Kairos AR, Inc.,Human Analytics, https://www.kairos.com/features (Last\naccessed: April 2018)"]}</dct:description>
    <dct:accessRights rdf:resource="http://publications.europa.eu/resource/authority/access-right/PUBLIC"/>
    <dct:accessRights>
      <dct:RightsStatement rdf:about="info:eu-repo/semantics/openAccess">
        <rdfs:label>Open Access</rdfs:label>
      </dct:RightsStatement>
    </dct:accessRights>
    <dct:license rdf:resource="https://creativecommons.org/licenses/by/4.0/legalcode"/>
    <dcat:distribution>
      <dcat:Distribution>
        <dcat:accessURL rdf:resource="https://doi.org/10.5281/zenodo.1316752"/>
        <dcat:byteSize>191296</dcat:byteSize>
        <dcat:downloadURL rdf:resource="https://zenodo.org/record/1316752/files/10009027.pdf"/>
        <dcat:mediaType>application/pdf</dcat:mediaType>
      </dcat:Distribution>
    </dcat:distribution>
  </rdf:Description>
</rdf:RDF>
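
For readers who want to work with this export programmatically, the following is a minimal sketch of how the metadata above could be read out of the graph. It assumes Python with the rdflib library and a local copy of the XML saved as dcat.rdf; both the library choice and the file name are assumptions for illustration, not part of the record.

# Minimal sketch (assumed setup): parse the DCAT export above with rdflib
# and print the record's title, creators, keywords, and download link.
from rdflib import Graph, Namespace

DCT = Namespace("http://purl.org/dc/terms/")
DCAT = Namespace("http://www.w3.org/ns/dcat#")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
g.parse("dcat.rdf", format="xml")  # hypothetical local copy of the RDF/XML shown above

# The record is the one subject carrying a dct:title.
record = next(g.subjects(DCT.title, None))
print("Title:   ", g.value(record, DCT.title))

# Each dct:creator is a blank node holding a foaf:name.
for creator in g.objects(record, DCT.creator):
    print("Creator: ", g.value(creator, FOAF.name))

for keyword in g.objects(record, DCAT.keyword):
    print("Keyword: ", keyword)

# The dcat:Distribution node links to the PDF via dcat:downloadURL.
dist = g.value(record, DCAT.distribution)
print("Download:", g.value(dist, DCAT.downloadURL))

Run against the export above, this would list the three creators and the five keywords, and print the PDF's download URL from the distribution block.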