Journal article Open Access

Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Silvia Santano Guillén; Luigi Lo Iacono; Christian Meder

MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="">
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">J. D. Mayer, P. Salovey, and D. R. Caruso, Mayer Salovey Caruso Emotional Intelligence Test (MSCEIT) user's manual, 2.0. Toronto, Canada: MHS Publishers, 2002.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">J. Liu, A. Harris, and N. Kanwisher, Stages of processing in face perception: an MEG study, Nat Neurosci, vol. 5, pp. 910–916, 2002.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">K. R. Scherer, What are emotions? And how can they be measured?, Social Science Information, vol. 44, pp. 695–729, 2005.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">E. Kennedy-Moore and J. Watson, Expressing Emotion: Myths, Realities, and Therapeutic Strategies. Emotions and Social Behavior, Guilford Press.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">P. Ekman, Universals and Cultural Differences in Facial Expressions of Emotion. University of Nebraska Press, 1971.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">P. Ekman, W. V. Friesen, and J. C. Hager, The facial action coding system, in Research Nexus eBook, 2002.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">P. Lucey, J. F. Cohn, T. Kanade, J. M. Saragih, Z. Ambadar, and I. A. Matthews, The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression, in IEEE Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2010, San Francisco, CA, USA, 13-18 June 2010, pp. 94–101, 2010.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Challenges in representation learning: Facial expression recognition challenge. (Last accessed: April 2018)</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">M. J. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, Coding facial expressions with Gabor wavelets, in 3rd International Conference on Face &amp; Gesture Recognition (FG'98), Nara, Japan, pp. 200–205, 1998.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">The Third Emotion Recognition in The Wild (EmotiW) 2015 Grand Challenge. (Last accessed: April 2018)</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Z. Yu and C. Zhang, Image based static facial expression recognition with multiple deep network learning, in ICMI '15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA, pp. 435–442, 2015.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">B.-K. Kim, J. Roh, S.-Y. Dong, and S.-Y. Lee, Hierarchical committee of deep convolutional neural networks for robust facial expression recognition, J. Multimodal User Interfaces, vol. 10, no. 2, pp. 173–189.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">G. Levi and T. Hassner, Emotion recognition in the wild via convolutional neural networks and mapped binary patterns, in ICMI '15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA, pp. 503–510, 2015.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Y. Lv, Z. Feng, and C. Xu, Facial expression recognition via deep learning, in SMARTCOMP, IEEE Computer Society, pp. 303–308, 2014.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">T. Ahsan, T. Jabid, and U.-P. Chong, Facial expression recognition using local transitional pattern on Gabor filtered facial images, IETE Technical Review, vol. 30, no. 1, pp. 47–52, 2013.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">A. Gudi, Recognizing semantic features in faces using deep learning, CoRR, vol. abs/1512.00743, 2015.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">E. Correa, A. Jonker, M. Ozo, and R. Stolk, Emotion recognition using deep convolutional neural networks, 2016.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">M. Hashemian, H. Moradi, and M. S. Mirian, How is his/her mood: A question that a companion robot may be able to answer, in Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016, Proceedings (A. Agah, J.-J. Cabibihan, A. M. Howard, M. A. Salichs, and H. He, eds.), pp. 274–284, Springer International Publishing, 2016.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">M. M. A. de Graaf, S. Ben Allouch, and J. A. G. M. van Dijk, What makes robots social?: A user's perspective on characteristics for social human-robot interaction, in Social Robotics: 7th International Conference, ICSR 2015, Paris, France, pp. 184–193, Springer International Publishing, 2015.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">A. Meghdari, M. Alemi, A. G. Pour, and A. Taheri, Spontaneous human-robot emotional interaction through facial expressions, in Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016, Proceedings (A. Agah, J.-J. Cabibihan, A. M. Howard, M. A. Salichs, and H. He, eds.), (Cham), pp. 351–361, Springer International Publishing, 2016.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">U. Hess and R. E. Kleck, Differentiating emotion elicited and deliberate emotional facial expressions, European Journal of Social Psychology, vol. 20, no. 5, pp. 369–385, 1990.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">M. Hirose, T. Takenaka, H. Gomi, and N. Ozawa, Humanoid robot, Journal of the Robotics Society of Japan, vol. 15, no. 7, pp. 983–985.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">K. Hirai, M. Hirose, Y. Haikawa, and T. Takenaka, The Honda humanoid robot: development and future perspective, Industrial Robot: An International Journal, vol. 26, no. 4, pp. 260–266, 1999.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">P. Ekman, J. C. Hager, and W. V. Friesen, The symmetry of emotional and deliberate facial actions, Psychophysiology, vol. 18, pp. 101–106, 1981.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">A. Schaefer, F. Nils, X. Sanchez, and P. Philippot, Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers, Cognition and Emotion, vol. 24, no. 7, pp. 1153–1172, 2010.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Aldebaran (Softbank Robotics), Pepper robot. (Last accessed: April 2018)</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, TensorFlow: A system for large-scale machine learning, in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283, 2016.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, ImageNet: A large-scale hierarchical image database, in IEEE Computer Vision and Pattern Recognition (CVPR), 2009.</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Softbank Robotics, ALMood Module. (Last accessed: April 2018)</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">OMRON Corporation, Facial Expression Estimation Technology. (Last accessed: April 2018)</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Google Inc., Cloud Vision API. (Last accessed: April 2018)</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Microsoft Corporation, Emotion API. (Last accessed: April 2018)</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Kairos AR, Inc., Human Analytics. (Last accessed: April 2018)</subfield>
  </datafield>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Affective computing</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">emotion recognition</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">humanoid</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Human-Robot-Interaction (HRI)</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">social robots</subfield>
  </datafield>
  <controlfield tag="005">20200120173803.0</controlfield>
  <controlfield tag="001">1316752</controlfield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Luigi Lo Iacono</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Christian Meder</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">191296</subfield>
    <subfield code="z">md5:3555c0ecb6c15d489a3483145ca50864</subfield>
    <subfield code="u"></subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2018-04-04</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="o"></subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="4">
    <subfield code="v">11.0</subfield>
    <subfield code="p">International Journal of Mechanical, Industrial and Aerospace Sciences</subfield>
    <subfield code="n">6</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Silvia Santano Guillén</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u"></subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2"></subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">One of the main aims of current social robotics research is to improve robots' abilities to interact with humans. To achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive, natural way and appropriately interpret human affects during social interactions. Much as humans recognize emotions in one another, machines can extract information from the various channels through which humans convey emotions, including facial expression, speech, gesture and text, and use this information to improve human-computer interaction. This is the domain of Affective Computing, an interdisciplinary field that draws on otherwise unrelated disciplines such as psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Embedding these emotional capabilities in humanoid robots is the foundation of the concept of Affective Robots, whose objective is to make robots capable of sensing the user's current mood and personality traits and of adapting their behavior accordingly. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions of the so-called basic emotions, and compared against other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The results show that detection accuracy differs substantially among the evaluated approaches. The introduced experiments offer a general structure and approach for conducting such evaluations. The paper further suggests that the most meaningful results are obtained with real subjects expressing emotions as spontaneous reactions.</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.1316751</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.1316752</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">article</subfield>
  </datafield>
</record>
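Fields in a MARC21 export like the one above can be read with Python's standard-library ElementTree. The sketch below operates on a small, hypothetical slice of the record (not the full export, which also carries a namespace declaration on `<record>` in many exports) and pulls out values by MARC tag and subfield code.

```python
import xml.etree.ElementTree as ET

# A trimmed MARC21 slice in the same shape as the export above
# (hypothetical subset; the real record carries many more fields).
marc = """<record>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Affective Robots</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">emotion recognition</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.1316752</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
</record>"""

root = ET.fromstring(marc)

def subfields(record, tag, code):
    """Collect every subfield value for a given MARC tag/code pair."""
    return [
        sf.text
        for df in record.findall(f"datafield[@tag='{tag}']")
        for sf in df.findall(f"subfield[@code='{code}']")
    ]

title = subfields(root, "245", "a")[0]     # tag 245 $a: title statement
keywords = subfields(root, "653", "a")     # tag 653 $a: uncontrolled keywords
doi = subfields(root, "024", "a")[0]       # tag 024 $a: standard identifier
print(title, keywords, doi)
```

Because repeatable MARC fields (like the 653 keywords or the 999 citation entries) simply repeat the `datafield` element, collecting them as a list is the natural access pattern.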