Conference paper Open Access

# Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment

Stavridis, Konstantinos; Psaltis, Athanasios; Dimou, Anastasios; Papadopoulos, Georgios Th.; Daras, Petros

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
<identifier identifierType="URL">https://zenodo.org/record/3560515</identifier>
<creators>
<creator>
<creatorName>Stavridis, Konstantinos</creatorName>
<givenName>Konstantinos</givenName>
<familyName>Stavridis</familyName>
<affiliation>CERTH</affiliation>
</creator>
<creator>
<creatorName>Psaltis, Athanasios</creatorName>
<givenName>Athanasios</givenName>
<familyName>Psaltis</familyName>
<affiliation>CERTH</affiliation>
</creator>
<creator>
<creatorName>Dimou, Anastasios</creatorName>
<givenName>Anastasios</givenName>
<familyName>Dimou</familyName>
<affiliation>CERTH</affiliation>
</creator>
<creator>
<creatorName>Papadopoulos, Georgios Th.</creatorName>
<givenName>Georgios Th.</givenName>
<familyName>Papadopoulos</familyName>
<affiliation>CERTH</affiliation>
</creator>
<creator>
<creatorName>Daras, Petros</creatorName>
<givenName>Petros</givenName>
<familyName>Daras</familyName>
<affiliation>CERTH</affiliation>
</creator>
</creators>
<titles>
<title>Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2019</publicationYear>
<subjects>
<subject>Gaze modeling</subject>
<subject>Deep learning</subject>
<subject>Relevance assessment</subject>
</subjects>
<dates>
<date dateType="Issued">2019-11-18</date>
</dates>
<resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3560515</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.23919/EUSIPCO.2019.8902990</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;The current work investigates the problem of object-level relevance assessment prediction, taking into account the user&amp;rsquo;s captured gaze signal (behaviour) and following the Deep Learning (DL) paradigm. Human gaze, as a sub-conscious response, is influenced by several factors related to human mental activity. Several studies have so far proposed methodologies based on gaze statistical modeling and naive classifiers for assessing images or image patches as relevant or not to the user&amp;rsquo;s interests. Nevertheless, the overwhelming majority of literature approaches have so far relied only on handcrafted features and relatively simple classification schemes. On the contrary, the current work focuses on the use of DL schemes that enable the modeling of complex patterns in the captured gaze signal and the subsequent derivation of corresponding discriminant features. Novel contributions of this study include: a) the introduction of a large-scale annotated gaze dataset, suitable for training DL models, b) a novel method for gaze modeling, capable of handling gaze sensor errors, and c) a DL-based method, able to capture gaze patterns for assessing image objects as relevant or non-relevant with respect to the user&amp;rsquo;s preferences. Extensive experiments demonstrate the efficiency of the proposed method, also taking into consideration key factors related to human gaze behaviour.&lt;/p&gt;</description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/787061/">787061</awardNumber>
<awardTitle>Advanced tools for fighting oNline Illegal TrAfficking</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
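The export above follows the DataCite XML layout, so its fields can be pulled out with any standard XML parser. As a minimal sketch (using Python's built-in `xml.etree.ElementTree` on a hypothetical self-contained excerpt of the record, not the full export), the title, year, and creator names can be read like this:

```python
import xml.etree.ElementTree as ET

# Illustrative excerpt of the DataCite record shown above; the real export
# carries more fields and namespace attributes on the <resource> root.
record = """<resource>
  <identifier identifierType="URL">https://zenodo.org/record/3560515</identifier>
  <creators>
    <creator>
      <creatorName>Stavridis, Konstantinos</creatorName>
      <affiliation>CERTH</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment</title>
  </titles>
  <publicationYear>2019</publicationYear>
</resource>"""

root = ET.fromstring(record)

# findtext() follows a simple path relative to the root element.
title = root.findtext("titles/title")
year = root.findtext("publicationYear")
# iter() walks every <creator> element, however deeply nested.
authors = [c.findtext("creatorName") for c in root.iter("creator")]

print(title)
print(year)     # 2019
print(authors)  # ['Stavridis, Konstantinos']
```

Note that a namespaced export (e.g. `xmlns="http://datacite.org/schema/kernel-4"`) would require namespace-qualified paths in `findtext()`.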
