Conference paper Open Access

# Attention-enhanced Sensorimotor Object Recognition

Thermos, S; Papadopoulos, GT; Daras, P; Potamianos, G

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
<identifier identifierType="URL">https://zenodo.org/record/3727849</identifier>
<creators>
<creator>
<creatorName>Thermos, S</creatorName>
<givenName>S</givenName>
<familyName>Thermos</familyName>
</creator>
<creator>
<creatorName>Papadopoulos, GT</creatorName>
<givenName>GT</givenName>
<familyName>Papadopoulos</familyName>
</creator>
<creator>
<creatorName>Daras, P</creatorName>
<givenName>P</givenName>
<familyName>Daras</familyName>
</creator>
<creator>
<creatorName>Potamianos, G</creatorName>
<givenName>G</givenName>
<familyName>Potamianos</familyName>
</creator>
</creators>
<titles>
<title>Attention-enhanced Sensorimotor Object Recognition</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2018</publicationYear>
<subjects>
<subject>Sensorimotor object recognition</subject>
<subject>attention mechanism</subject>
<subject>stream fusion</subject>
<subject>deep neural networks</subject>
</subjects>
<dates>
<date dateType="Issued">2018-10-10</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3727849</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1109/ICIP.2018.8451158</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/vrtogether-h2020</relatedIdentifier>
</relatedIdentifiers>
<version>pre-print</version>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;Sensorimotor learning, namely the process of understanding the physical world by combining visual and motor information, has been recently investigated, achieving promising results for the task of 2D/3D object recognition. Following the recent trend in computer vision, powerful deep neural networks (NNs) have been used to model the &amp;ldquo;sensory&amp;rdquo; and &amp;ldquo;motor&amp;rdquo; information, namely the object appearance and affordance. However, the existing implementations cannot efficiently address the spatio-temporal nature of the human-object interaction. Inspired by recent work on attention-based learning, this paper introduces an attention-enhanced NN-based model that learns to selectively focus on parts of the physical interaction where the object appearance is corrupted by occlusions and deformations. The model&amp;rsquo;s attention mechanism relies on the confidence of classifying an object based solely on its appearance. Three metrics are used to measure the latter, namely the prediction entropy, the average N-best likelihood difference, and the N-best likelihood dispersion. Evaluation of the attention-enhanced model on the SOR3D dataset reports 33% and 26% relative improvement over the appearance-only and the spatio-temporal fusion baseline models, respectively.&lt;/p&gt;</description>
</descriptions>
</resource>
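The abstract names three metrics used to measure appearance-only classification confidence: prediction entropy, average N-best likelihood difference, and N-best likelihood dispersion. The paper itself defines these precisely; the sketch below is one plausible reading of each metric over a softmax posterior, with the function names, the choice of N, and the use of standard deviation for "dispersion" all being assumptions, not the authors' exact formulation.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy of the class posterior; lower means more confident."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def nbest_likelihood_difference(probs, n=3):
    """Average gap between the top posterior and each of the next n-1 best.
    A large gap suggests the appearance stream is confident."""
    top = sorted(probs, reverse=True)[:n]
    return sum(top[0] - p for p in top[1:]) / (n - 1)

def nbest_dispersion(probs, n=3):
    """Standard deviation of the n best posteriors; here, a peaked (confident)
    posterior yields high dispersion among its top-n values."""
    top = sorted(probs, reverse=True)[:n]
    mean = sum(top) / n
    return math.sqrt(sum((p - mean) ** 2 for p in top) / n)

# Illustrative posteriors: one confident, one uncertain.
confident = [0.90, 0.05, 0.03, 0.02]
uncertain = [0.30, 0.28, 0.22, 0.20]
```

Under this reading, a confident posterior has low entropy, a large N-best difference, and high top-N dispersion, so the attention mechanism described in the abstract could weight the motor ("affordance") stream more heavily whenever these appearance-confidence signals degrade, e.g. under occlusion.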
