Journal article Open Access

A Deep Generic to Specific Recognition Model for Group Membership Analysis using Non-verbal Cues

Mou, Wenxuan; Tzelepis, Christos; Mezaris, Vasileios; Gunes, Hatice; Patras, Ioannis

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="" xmlns="" xsi:schemaLocation="">
  <identifier identifierType="URL"></identifier>
  <creators>
    <creator>
      <creatorName>Mou, Wenxuan</creatorName>
      <affiliation>Queen Mary University of London, UK</affiliation>
    </creator>
    <creator>
      <creatorName>Tzelepis, Christos</creatorName>
      <affiliation>Information Technologies Institute/Centre for Research and Technology Hellas (CERTH), Greece</affiliation>
    </creator>
    <creator>
      <creatorName>Mezaris, Vasileios</creatorName>
      <affiliation>Information Technologies Institute/Centre for Research and Technology Hellas (CERTH), Greece</affiliation>
    </creator>
    <creator>
      <creatorName>Gunes, Hatice</creatorName>
      <affiliation>University of Cambridge, UK</affiliation>
    </creator>
    <creator>
      <creatorName>Patras, Ioannis</creatorName>
      <affiliation>Queen Mary University of London, UK</affiliation>
    </creator>
  </creators>
  <titles>
    <title>A Deep Generic to Specific Recognition Model for Group Membership Analysis using Non-verbal Cues</title>
  </titles>
  <subjects>
    <subject>Non-verbal behavior analysis</subject>
    <subject>Group membership</subject>
    <subject>Automatic group analysis</subject>
    <subject>Deep learning</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2018-10-03</date>
  </dates>
  <resourceType resourceTypeGeneral="JournalArticle"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1016/j.imavis.2018.09.005</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf"></relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="">Creative Commons Attribution Non Commercial No Derivatives 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Automatic understanding and analysis of groups has attracted increasing attention in the vision and multimedia communities in recent years. However, little attention has been paid to the automatic analysis of non-verbal behaviors and to how such analysis can be utilized for the study of group membership, i.e., recognizing which group each individual is part of. This paper presents a novel Support Vector Machine (SVM) based Deep &lt;em&gt;Specific Recognition Model (DeepSRM)&lt;/em&gt; that is learned on the basis of a &lt;em&gt;generic recognition model&lt;/em&gt;. The &lt;em&gt;generic recognition model&lt;/em&gt; refers to a model trained with data across different conditions, i.e., while people are watching movies of different types. Although the &lt;em&gt;generic recognition model&lt;/em&gt; provides a baseline for the recognition model trained for each specific condition, the different behaviors people exhibit in different conditions limit its recognition performance. Therefore, a &lt;em&gt;specific recognition model&lt;/em&gt; is proposed for each condition separately and built on top of the &lt;em&gt;generic recognition model&lt;/em&gt;. A number of experiments are conducted on a database designed for studying group analysis, in which each group (i.e., four participants together) watched a number of long movie segments. Our experimental results show that the proposed &lt;em&gt;deep specific recognition model&lt;/em&gt; (44%) outperforms the &lt;em&gt;generic recognition model&lt;/em&gt; (26%). The recognition of group membership also indicates that the non-verbal behaviors of individuals within a group share commonalities.&lt;/p&gt;</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/100010661</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/693092/">693092</awardNumber>
      <awardTitle>Training towards a society of data-savvy information professionals to enable open leadership innovation</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
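The generic-to-specific scheme described in the abstract can be illustrated with a small, hypothetical sketch: a generic SVM is trained on non-verbal feature vectors pooled across all viewing conditions, and a per-condition specific model is then stacked on the generic model's decision scores. All data here is synthetic and the feature dimensions, stacking choice, and hyperparameters are assumptions for illustration only; the paper's actual DeepSRM architecture is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_groups, n_conditions, n_feats = 4, 3, 10

# Synthetic non-verbal feature vectors, shaped
# (condition, sample, feature), with group-membership labels.
X = rng.normal(size=(n_conditions, 80, n_feats))
y = rng.integers(0, n_groups, size=(n_conditions, 80))

# 1. Generic recognition model: trained on data pooled
#    across all conditions (movie types).
generic = SVC(kernel="rbf")
generic.fit(X.reshape(-1, n_feats), y.reshape(-1))

# 2. Specific recognition models: one per condition, built on top
#    of the generic model by reusing its decision values as input
#    features (one possible generic-to-specific stacking; the
#    paper's exact construction may differ).
specific = []
for c in range(n_conditions):
    scores = generic.decision_function(X[c])  # (80, n_groups) OvR scores
    specific.append(SVC(kernel="linear").fit(scores, y[c]))

# Predict group membership for samples from condition 0.
pred = specific[0].predict(generic.decision_function(X[0]))
print(pred.shape)  # one predicted group label per sample
```

The stacking step is the key design choice: the specific model does not discard the generic model, but corrects it for the behaviors particular to one condition, which is why it can outperform the generic baseline when behavior varies across conditions.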