Conference paper Open Access

Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment

Stavridis, Konstantinos; Psaltis, Athanasios; Dimou, Anastasios; Papadopoulos, Georgios Th.; Daras, Petros


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="942" ind1=" " ind2=" ">
    <subfield code="a">2020-05-18</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Gaze modeling</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Deep learning</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Relevance assessment</subfield>
  </datafield>
  <controlfield tag="005">20200518082025.0</controlfield>
  <controlfield tag="001">3560515</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">2-6 September 2019</subfield>
    <subfield code="g">EUSIPCO</subfield>
    <subfield code="a">2019 27th European Signal Processing Conference</subfield>
    <subfield code="c">A Coruna, Spain</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">CERTH</subfield>
    <subfield code="a">Psaltis, Athanasios</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">CERTH</subfield>
    <subfield code="a">Dimou, Anastasios</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">CERTH</subfield>
    <subfield code="a">Papadopoulos, Georgios Th.</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Daras, Petros</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1003747</subfield>
    <subfield code="z">md5:07e86d309910b5b27d6c316231b0f9b4</subfield>
    <subfield code="u">https://zenodo.org/record/3560515/files/Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment .pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2019-11-18</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="o">oai:zenodo.org:3560515</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">CERTH</subfield>
    <subfield code="a">Stavridis, Konstantinos</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">787061</subfield>
    <subfield code="a">Advanced tools for fighting oNline Illegal TrAfficking</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;The current work investigates the problem of object-level relevance assessment prediction, taking into account the user&amp;rsquo;s captured gaze signal (behaviour) and following the Deep Learning (DL) paradigm. Human gaze, as a sub-conscious response, is influenced by several factors related to human mental activity. Several studies have so far proposed methodologies based on gaze statistical modeling and naive classifiers for assessing images or image patches as relevant or not to the user&amp;rsquo;s interests. Nevertheless, the overwhelming majority of literature approaches have so far relied on handcrafted features and relatively simple classification schemes. On the contrary, the current work focuses on the use of DL schemes that enable the modeling of complex patterns in the captured gaze signal and the subsequent derivation of corresponding discriminant features. Novel contributions of this study include: a) the introduction of a large-scale annotated gaze dataset, suitable for training DL models, b) a novel method for gaze modeling, capable of handling gaze sensor errors, and c) a DL-based method, able to capture gaze patterns for assessing image objects as relevant or non-relevant with respect to the user&amp;rsquo;s preferences. Extensive experiments demonstrate the efficiency of the proposed method, also taking into consideration key factors related to human gaze behaviour.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.23919/EUSIPCO.2019.8902990</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">conferencepaper</subfield>
  </datafield>
</record>
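
The export above follows the MARC21/slim schema, where each bibliographic field is a `datafield` with a numeric `tag` and one or more coded `subfield` children. A minimal sketch of extracting a few fields with Python's standard library (the embedded string reproduces only the title (245), DOI (024), and first-author (100) fields from the record; the helper `subfields` is a name chosen here for illustration):

```python
import xml.etree.ElementTree as ET

# MARC21/slim places all elements in this namespace.
MARC_NS = "{http://www.loc.gov/MARC21/slim}"

record_xml = """<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">CERTH</subfield>
    <subfield code="a">Stavridis, Konstantinos</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.23919/EUSIPCO.2019.8902990</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
</record>"""

def subfields(record, tag, code):
    """Return all subfield values for a given datafield tag / subfield code."""
    return [
        sf.text
        for df in record.findall(f"{MARC_NS}datafield[@tag='{tag}']")
        for sf in df.findall(f"{MARC_NS}subfield[@code='{code}']")
    ]

record = ET.fromstring(record_xml)
title = subfields(record, "245", "a")[0]
doi = subfields(record, "024", "a")[0]
first_author = subfields(record, "100", "a")[0]
print(title)
print(doi, first_author)
```

Repeatable fields (e.g. the three `653` keyword entries or the `700` co-authors in the full record) come back as multiple list entries from the same helper.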