Journal article Open Access

Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Silvia Santano Guillén; Luigi Lo Iacono; Christian Meder

JSON Export

  "files": [
      "links": {
        "self": ""
      "checksum": "md5:3555c0ecb6c15d489a3483145ca50864", 
      "bucket": "e0a91a38-f5c1-448a-ac99-584bd759a912", 
      "key": "10009027.pdf", 
      "type": "pdf", 
      "size": 191296
  "owners": [
  "doi": "10.5281/zenodo.1316752", 
  "stats": {
    "version_unique_downloads": 99.0, 
    "unique_views": 162.0, 
    "views": 173.0, 
    "version_views": 172.0, 
    "unique_downloads": 99.0, 
    "version_unique_views": 161.0, 
    "volume": 20086080.0, 
    "version_downloads": 105.0, 
    "downloads": 105.0, 
    "version_volume": 20086080.0
  "links": {
    "doi": "", 
    "conceptdoi": "", 
    "bucket": "", 
    "conceptbadge": "", 
    "html": "", 
    "latest_html": "", 
    "badge": "", 
    "latest": ""
  "conceptdoi": "10.5281/zenodo.1316751", 
  "created": "2018-07-19T17:15:23.600505+00:00", 
  "updated": "2020-01-20T17:38:03.997087+00:00", 
  "conceptrecid": "1316751", 
  "revision": 5, 
  "id": 1316752, 
  "metadata": {
    "access_right_category": "success", 
    "doi": "10.5281/zenodo.1316752", 
    "description": "One of the main aims of current social robotic research<br>\nis to improve the robots&rsquo; abilities to interact with humans. In order<br>\nto achieve an interaction similar to that among humans, robots<br>\nshould be able to communicate in an intuitive and natural way<br>\nand appropriately interpret human affects during social interactions.<br>\nSimilarly to how humans are able to recognize emotions in other<br>\nhumans, machines are capable of extracting information from the<br>\nvarious ways humans convey emotions&mdash;including facial expression,<br>\nspeech, gesture or text&mdash;and using this information for improved<br>\nhuman computer interaction. This can be described as Affective<br>\nComputing, an interdisciplinary field that expands into otherwise<br>\nunrelated fields like psychology and cognitive science and involves<br>\nthe research and development of systems that can recognize and<br>\ninterpret human affects. To leverage these emotional capabilities<br>\nby embedding them in humanoid robots is the foundation of<br>\nthe concept Affective Robots, which has the objective of making<br>\nrobots capable of sensing the user&rsquo;s current mood and personality<br>\ntraits and adapt their behavior in the most appropriate manner<br>\nbased on that. In this paper, the emotion recognition capabilities<br>\nof the humanoid robot Pepper are experimentally explored, based<br>\non the facial expressions for the so-called basic emotions, as<br>\nwell as how it performs in contrast to other state-of-the-art<br>\napproaches with both expression databases compiled in academic<br>\nenvironments and real subjects showing posed expressions as well<br>\nas spontaneous emotional reactions. The experiments&rsquo; results show<br>\nthat the detection accuracy amongst the evaluated approaches differs<br>\nsubstantially. The introduced experiments offer a general structure<br>\nand approach for conducting such experimental evaluations. 
The<br>\npaper further suggests that the most meaningful results are obtained<br>\nby conducting experiments with real subjects expressing the emotions<br>\nas spontaneous reactions.", 
    "language": "eng", 
    "title": "Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines", 
    "license": {
      "id": "CC-BY-4.0"
    "journal": {
      "volume": "11.0", 
      "issue": "6", 
      "title": "International Journal of Mechanical, Industrial and Aerospace Sciences"
    "relations": {
      "version": [
          "count": 1, 
          "index": 0, 
          "parent": {
            "pid_type": "recid", 
            "pid_value": "1316751"
          "is_last": true, 
          "last_child": {
            "pid_type": "recid", 
            "pid_value": "1316752"
    "version": "10009027", 
    "references": [
      "Mayer JD, Salovey P, Caruso DR, Mayer Salovey Caruso Emotional\nIntelligence Test (MSCEIT) users manual, 2.0. Toronto, Canada: MHS\nPublishers, 2002.", 
      "J. Liu, A. Harris, N. Kanwisher, Stages of processing in face perception:\nan meg study, Nat Neurosci, vol. 5, pp. 910916, 09 2002.", 
      "Klaus R. Scherer, Mayer Salovey Caruso Emotional Intelligence Test\n(MSCEIT) users manual, v. 44, 695-729 Social Science Information,\n2005.", 
      "E. Kennedy-Moore, J. Watson, Expressing Emotion: Myths, Realities, and\nTherapeutic Strategies. Emotions and social behavior, Guilford Press,\n1999.", 
      "P. Ekman, Universals and Cultural Differences in Facial Expressions of\nEmotion. University of Nebraska Press, 1971.", 
      "P. Ekman, W. V. Friesen, and J. C. Hager, The facial action coding system,\nin Research Nexus eBook, 2002.", 
      "P. Lucey, J. F. Cohn, T. Kanade, J. M. Saragih, Z. Ambadar, and I. A.\nMatthews, The extended cohn-kanade dataset (CK+): A complete dataset\nfor action unit and emotion-specified expression, in IEEE Conference on\nComputer Vision and Pattern Recognition, CVPR Workshops 2010, San\nFrancisco, CA, USA, 13-18 June, 2010, pp. 94\u2013101, 2010.", 
      "Challenges in representation learning: Facial expression recognition\nchallenge,\nfacial-expression-recognition-challenge (Last accessed: in April\n2018)", 
      "M. J. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, Coding facial\nexpressions with gabor wavelets in 3rd International Conference on Face\n& Gesture Recognition (FG'98), Nara, Japan, pp. 200\u2013205, 1998.\n[10] The Third Emotion Recognition in The Wild (EmotiW) 2015 Grand\nChallenge, (Last accessed:\nApril 2018)\n[11] Z. Yu and C. Zhang, Image based static facial expression recognition\nwith multiple deep network learning in ICMI' 15 Proceedings of the 2015\nACM on International Conference on Multimodal Interaction, Seattle,\nWA, USA, pp. 435\u2013442, 2015.\n[12] B.-K. Kim, J. Roh, S.-Y. Dong, and S.-Y. Lee, Hierarchical committee\nof deep convolutional neural networks for robust facial expression\nrecognition in J. Multimodal User Interfaces, vol. 10, no. 2, pp. 173\u2013189,\n2016.\n[13] G. Levi and T. Hassner, Emotion recognition in the wild via\nconvolutional neural networks and mapped binary patterns in ICMI' 15\nProceedings of the 2015 ACM on International Conference on Multimodal\nInteraction, Seattle, WA, USA, pp. 503\u2013510, 2015.\n[14] Y. Lv, Z. Feng, and C. Xu, Facial expression recognition via deep\nlearning, in SMARTCOMP, IEEE Computer Society, pp. 303\u2013308, 2014.\n[15] T. Ahsan, T. Jabid, and U.-P. Chong, Facial expression recognition using\nlocal transitional pattern on gabor filtered facial images, IETE Technical\nReview, vol. 30, no. 1, pp. 47\u201352, 2013.\n[16] A. Gudi, Recognizing semantic features in faces using deep learning,\nCoRR, vol. abs/1512.00743, 2015.\n[17] E. Correa, A. Jonker, M. Ozo, and R. Stolk, Emotion recognition using\ndeep convolutional neural networks., 2016.\n[18] M. Hashemian, H. Moradi, and M. S. Mirian, How is his/her mood:\nA question that a companion robot may be able to answer, in Social\nRobotics: 8th International Conference, ICSR 2016, Kansas City, MO,\nUSA, November 1-3, 2016 Proceedings (A. Agah, J.-J. Cabibihan, A. M.\nHoward, M. A. Salichs, and H. He, eds.), pp. 
274\u2013284, Springer\nInternational Publishing, 2016.\n[19] M. M. A. de Graaf, S. Ben Allouch, and J. A. G. M. van Dijk,\nWhat makes robots social?: A user's perspective on characteristics\nfor social human-robot interaction, in Proceedings of Social Robotics:\n7th International Conference, ICSR 2015, Paris, France, pp. 184\u2013193,\nSpringer International Publishing, 2015.\n[20] A. Meghdari, M. Alemi, A. G. Pour, and A. Taheri, Spontaneous\nhuman-robot emotional interaction through facial expressions, in Social\nRobotics: 8th International Conference, ICSR 2016, Kansas City, MO,\nUSA, November 1-3, 2016 Proceedings (A. Agah, J.-J. Cabibihan, A. M.\nHoward, M. A. Salichs, and H. He, eds.), (Cham), pp. 351\u2013361, Springer\nInternational Publishing, 2016.\n[21] U. Hess and R. E. Kleck, Differentiating emotion elicited and deliberate\nemotional facial expressions, European Journal of Social Psychology,\nvol. 20, no. 5, pp. 369\u2013385, 1990.\n[22] M. Hirose, T. Takenaka, H. Gomi and N. Ozawa, Humanoid robot,\nJournal of the Robotics Society of Japan, vol. 15, no. 7, pp. 983\u2013985,\n1997.\n[23] K. Hirai, M. Hirose, Y. Haikawa and T. Takenaka, The Honda\nhumanoid robot: development and future perspective, Industrial Robot:\nAn International Journal, vol. 26, no. 4, pp. 260\u2013266, 1999.\n[24] P. Ekman, J.C. Hager, W.V. Friesen, The symmetry of emotional and\ndeliberate facial actions, Psychophysiology, 18: 101-106, 1981.\n[25] A. Schaefer, F. Nils, X. Sanchez, and P. Philippot, Assessing the\neffectiveness of a large database of emotion-eliciting films: A new\ntool for emotion researchers, Cognition and Emotion, vol. 24, no. 7,\npp. 1153\u20131172, 2010.\n[26] Aldebaran (Softbank Robotics), Pepper robot,\n (Last accessed:\nApril 2018)\n[27] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin,\nS. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga,\nS. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden,\nM. 
Wicke, Y. Yu, and X. Zheng, Tensorflow: A system for large-scale\nmachine learning in 12th USENIX Symposium on Operating Systems\nDesign and Implementation (OSDI 16), pp. 265\u2013283, 2016. [28] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, ImageNet:\nA Large-Scale Hierarchical Image Database in IEEE Computer Vision\nand Pattern Recognition (CVPR), 2009.\n[29] Softbank Robotics, ALMood Module,\n (Last accessed:\nApril 2018)\n[30] OMRON Corporation, Facial Expression Estimation Technology,\n (Last accessed:\nApril 2018)\n[31] Google Inc., Cloud Vision API, (Last\naccessed: April 2018)\n[32] Microsoft Corporation, Emotion API,\n\n(Last accessed: April 2018)\n[33] Kairos AR, Inc.,Human Analytics, (Last\naccessed: April 2018)"
    "keywords": [
      "Affective computing", 
      "emotion recognition", 
      "Human-Robot-Interaction (HRI)", 
      "social robots."
    "publication_date": "2018-04-04", 
    "creators": [
        "name": "Silvia Santano Guill\u00e9n"
        "name": "Luigi Lo Iacono"
        "name": "Christian Meder"
    "access_right": "open", 
    "resource_type": {
      "subtype": "article", 
      "type": "publication", 
      "title": "Journal article"
    "related_identifiers": [
        "scheme": "doi", 
        "identifier": "10.5281/zenodo.1316751", 
        "relation": "isVersionOf"
                    All versions   This version
Views                        172            173
Downloads                    105            105
Data volume              20.1 MB        20.1 MB
Unique views                 161            162
Unique downloads              99             99

