Journal article · Embargoed Access

# A Deep Generic to Specific Recognition Model for Group Membership Analysis using Non-verbal Cues

Mou, Wenxuan; Tzelepis, Christos; Mezaris, Vasileios; Gunes, Hatice; Patras, Ioannis

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4">
<identifier identifierType="URL">https://zenodo.org/record/1464113</identifier>
<creators>
<creator>
<creatorName>Mou, Wenxuan</creatorName>
<givenName>Wenxuan</givenName>
<familyName>Mou</familyName>
<affiliation>Queen Mary University of London, UK</affiliation>
</creator>
<creator>
<creatorName>Tzelepis, Christos</creatorName>
<givenName>Christos</givenName>
<familyName>Tzelepis</familyName>
<affiliation>Information Technologies Institute/Centre for Research and Technology Hellas (CERTH), Greece</affiliation>
</creator>
<creator>
<creatorName>Mezaris, Vasileios</creatorName>
<givenName>Vasileios</givenName>
<familyName>Mezaris</familyName>
<affiliation>Information Technologies Institute/Centre for Research and Technology Hellas (CERTH), Greece</affiliation>
</creator>
<creator>
<creatorName>Gunes, Hatice</creatorName>
<givenName>Hatice</givenName>
<familyName>Gunes</familyName>
<affiliation>University of Cambridge, UK</affiliation>
</creator>
<creator>
<creatorName>Patras, Ioannis</creatorName>
<givenName>Ioannis</givenName>
<familyName>Patras</familyName>
<affiliation>Queen Mary University of London, UK</affiliation>
</creator>
</creators>
<titles>
<title>A Deep Generic to Specific Recognition Model for Group Membership Analysis using Non-verbal Cues</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2018</publicationYear>
<subjects>
<subject>Non-verbal behavior analysis</subject>
<subject>Group membership</subject>
<subject>Automatic group analysis</subject>
<subject>Deep learning</subject>
</subjects>
<dates>
<date dateType="Available">2019-10-03</date>
<date dateType="Accepted">2018-10-03</date>
</dates>
<resourceType resourceTypeGeneral="Text">Journal article</resourceType>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/1464113</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1016/j.imavis.2018.09.005</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/moving-h2020</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/embargoedAccess">Embargoed Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;Automatic understanding and analysis of groups has attracted increasing attention in the vision and multimedia communities in recent years. However, little attention has been paid to the automatic analysis of non-verbal behaviors and to how such behaviors can be utilized for the analysis of group membership, i.e., recognizing which group each individual is part of. This paper presents a novel Support Vector Machine (SVM)-based Deep &lt;em&gt;Specific Recognition Model (DeepSRM)&lt;/em&gt; that is learned on the basis of a &lt;em&gt;generic recognition model&lt;/em&gt;. The &lt;em&gt;generic recognition model&lt;/em&gt; refers to a model trained with data pooled across different conditions, i.e., while people are watching movies of different types. Although the &lt;em&gt;generic recognition model&lt;/em&gt; can provide a baseline for the recognition model trained for each specific condition, the different behaviors people exhibit in different conditions limit its recognition performance. Therefore, a &lt;em&gt;specific recognition model&lt;/em&gt; is proposed for each condition separately and built on top of the &lt;em&gt;generic recognition model&lt;/em&gt;. A number of experiments are conducted on a database collected to study group analysis, in which each group (i.e., four participants together) was watching a number of long movie segments. Our experimental results show that the proposed &lt;em&gt;deep specific recognition model&lt;/em&gt; (44%) outperforms the &lt;em&gt;generic recognition model&lt;/em&gt; (26%). The recognition of group membership also indicates that the non-verbal behaviors of individuals within a group share commonalities.&lt;/p&gt;</description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/693092/">693092</awardNumber>
<awardTitle>Training towards a society of data-savvy information professionals to enable open leadership innovation</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
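The abstract describes a two-stage scheme: a generic model trained on data pooled across all viewing conditions, and a per-condition specific model built on top of it. The toy sketch below illustrates that generic-to-specific idea only in outline; it is not the authors' implementation. Simple nearest-centroid classifiers stand in for the paper's SVM-based deep model, and all data, labels, and condition names are invented for illustration.

```python
# Hedged sketch of a generic-to-specific pipeline (illustrative only; the
# paper's actual model is an SVM-based deep architecture, not shown here).
from collections import defaultdict
from statistics import mean


def centroids(samples):
    """Mean feature vector per group label."""
    by_label = defaultdict(list)
    for features, label in samples:
        by_label[label].append(features)
    return {lbl: [mean(dim) for dim in zip(*vecs)]
            for lbl, vecs in by_label.items()}


def predict(cents, features):
    """Nearest-centroid prediction (squared Euclidean distance)."""
    return min(cents, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(features, cents[lbl])))


# Toy data: (non-verbal feature vector, group label), keyed by condition
# (the movie type being watched). All values are made up.
data = {
    "comedy": [([0.1, 0.9], "groupA"), ([0.8, 0.2], "groupB")],
    "horror": [([0.2, 0.7], "groupA"), ([0.9, 0.1], "groupB")],
}

# 1) Generic model: trained on data pooled over all conditions.
generic = centroids([s for cond in data.values() for s in cond])

# 2) Specific models: one per condition, built "on top of" the generic
#    model -- here crudely, by averaging generic and condition-local centroids.
specific = {}
for cond, samples in data.items():
    local = centroids(samples)
    specific[cond] = {lbl: [(g + l) / 2 for g, l in zip(generic[lbl], local[lbl])]
                      for lbl in generic}

# At test time, the specific model matching the known condition is used.
print(predict(specific["comedy"], [0.15, 0.85]))  # -> groupA
```

The split mirrors the paper's motivation: the generic model gives a condition-independent baseline, while each specific model adapts to the behaviors people exhibit under that particular condition.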
